All right. Welcome, everyone, to today's Google Search Central Office Hours Hangout. My name is John Mueller. I'm a Search Advocate at Google, and part of what I do is these office hours, where people can join in and ask questions around their website and web search, and we can try to find some answers. And, wow, lots of people joining today. Lots of questions submitted. This is good. Cool. A bunch of things were already submitted, but if any of you want to get started with the first question, you're welcome to jump on in.

Hi, John.

Hi.

I actually have two questions. The first one is about changing a website. One of our clients changed their website. They moved the website from WordPress to Drupal. Fortunately, we are keeping the same URLs as in WordPress, so the URL structure will not change. The only things they are going to change are the design of the site and the content. They are going to add more content. Now, the question is about the rankings they have for those pages at the moment. If they make this content change and design change, will that affect the rankings?

It can, yes. I mean, anytime you change your website, that's something that should be reflected in your rankings, right? So if you add text to your pages, if you remove text, if you change the internal linking, if you change the layout so that you have different things as headings and titles, then that's something that should be reflected in search. Keeping the URLs the same, I think, makes it a lot easier, because we can just keep a lot of those signals already. But we still need to look at the content, like what is on these pages and how they're linked between each other. And it's not the case that there will always be a negative change. A lot of people do these kinds of migrations because they want to improve things. Because it's like, well, I noticed my internal linking is bad with my old CMS.
And I moved to a CMS that does internal linking a lot better, so that helps me to kind of get my site crawled better, for example.

And the next question is about backlinks. How does Google measure the weight of a backlink? Some people get business listings. Some people get backlinks from content. Some people get backlinks from a regional site, like .com.au. Some people get backlinks from a .com site. Does Google differentiate that? Like, OK, these are Australian sites, they're getting backlinks from .com.au, so we give them more importance because they are getting .com.au links? Or is getting a backlink from .com the same as from .com.au? And what about the other types of backlinks, business listings, or getting backlinks from content?

That gets really complicated. What we wouldn't differentiate on is just that they're different top-level domains. So an .au site and a .com or .uk site or whatever, it's not that we would say they have different weights as individual types of links. But we do try to differentiate by the type of link that it is, and to figure out what its importance is with regards to the context that it provides. Is it something that tells us a lot about the page that it's linking to, or is it something that is more like, oh, by the way, we're also here, kind of thing, which is more like, well, OK, we saw it, and that's OK.

Thank you, John.

Hey, John. Good morning.

Morning.

I have a question about archiving. We have a site with a large number of press release pages. These are quite old, maybe from 2015 and maybe even older. And the thing is, they have accumulated a large number of backlinks, but they don't get any traffic. These are quite highly authoritative domains, according to them. But now we want to clean them up, because they are taking up real estate, or at least the site becomes a bit too big there. So I was thinking, OK, we can move them to an archive.
But I still would like to benefit from the SEO power these pages have built up over time. So is there a way to do this cleverly? Basically, moving them to an archive, for example a sub-directory, but then still, let's say, benefiting from the SEO power.

Yeah, I mean, you can just redirect them to a different part of your site. If you have an archive section for this kind of older content, which is very common, then moving the content there and redirecting the URLs there essentially tells us to forward the links there.

But we were also thinking, because a lot of them, so they need to still be there for, let's say, legal purposes, for example. But I would still like to say, OK, the crawler doesn't need to access these folders all the time. So I was thinking about disallowing that folder. But if I disallow a sub-directory, will this also mean that the built-up SEO power will be ignored from that point onwards? Because in a sense, I was thinking like this: what you say to the crawler is, hey, you don't have to crawl this folder anymore. But I don't think you say, ignore the built-up SEO power. Or do I see this wrong?

So probably we would already automatically crawl less if we recognize that they're less relevant for your site; it's not the case that you need to block it completely. If you were to block it completely with robots.txt, we would not know what was on those pages. So essentially, when there are lots of external links pointing to a page that is blocked by robots.txt, then sometimes we can still show that page in the search results. But we'll show it with a title based on the links, and a text that says, oh, we don't know what is actually here. And if that page that is being linked to is something that refers to more content within your website, we wouldn't know. So we can't indirectly forward those links to your primary content.
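The trade-off being described here can be sketched as a robots.txt fragment; the directory name is a hypothetical example, not from the question:

```text
# Hypothetical robots.txt: this stops crawling of an archive folder,
# but it also stops Google from reading those pages, so links pointing
# at them can no longer pass value on to the rest of the site.
User-agent: *
Disallow: /press-archive/
```

The redirect-to-an-archive-section approach keeps those pages crawlable instead, which is why it preserves the link value.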
So that's something where, if you see that these pages are important enough that people are linking to them, then I would try to avoid blocking them by robots.txt. The other thing to keep in mind is that these kinds of press release pages, things that collect over time, usually attract links of a very time-limited kind, where a news site will link to a page once. And then, when you look at the number of links, it might look like, oh, there are lots of links here. But these are really old news articles which are in the archives of those news sites, essentially. It's also kind of a sign: well, they have links, but those links are not extremely useful, because they've become so irrelevant in the meantime.

Thanks so much. I appreciate your time.

Thanks.

Hey, hi, John. I want to know something very basic about copied content. As we all know, copied content is against Google's guidelines and is also not a good practice in general. So I just want to know how Google determines whether a particular piece of content is copied or not, and who the actual original source for that content is.

How we determine if it's copied, I think, is kind of tricky in some aspects. In some aspects, it is really easy, because if you take a piece of text and you search for it, then if exactly the same text is on the web on other pages, that's a pretty good sign that this is copied content. But it's also something where we would not automatically flag everything that copies something. So for example, you might have copied content that is more along the lines of boilerplate text, like, I don't know, a legal disclaimer on the bottom of your site, which is something that you have across all of the pages of your site. And technically, that's copied content. But practically, for us, that's not really an issue, because these are things that people are generally not searching for.
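As a toy illustration of the "search for an exact piece of text" idea: the function and strings below are invented for this sketch, and real duplicate detection at web scale is far more involved than an exact-match check.

```python
import re

def normalize(text):
    # Collapse whitespace and lowercase, so trivial formatting
    # differences don't hide an otherwise exact copy.
    return re.sub(r"\s+", " ", text).strip().lower()

def contains_passage(page_text, passage):
    # True if the passage appears verbatim (after normalization)
    # inside the other page's text.
    return normalize(passage) in normalize(page_text)

original = "Our new widget ships with a titanium frame. It weighs 80 grams"
copy = "Breaking: our new widget ships with a\n titanium frame. It weighs 80 grams, sources say."

print(contains_passage(copy, original))  # True
```

The boilerplate case John mentions is exactly why this alone is not enough: a legal disclaimer would match across every page of a site without being a copied-content problem.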
It's not that they're searching for the legal disclaimer and want to find your site. It's more that they're looking for your primary content. And in that regard, it's something where we try to weigh the copied content appropriately, but also still look at the rest of your site. So I don't know, actually, if that helps with regards to your question. The thing is, it's easy to recognize that there is copied content on these pages, but it's hard to figure out what we should do about that copied content.

Regarding the author or the owner of that content, I don't think we go and make any judgment in that regard, because that's really tricky. We can't determine who the owner is. And sometimes the person who wrote it first is not, for example, the one that is the most relevant. We see this a lot of times with our own blog posts, for example, where we will write a blog post and put the information we want to share in it. And someone will copy that content and add a lot of extra information around it. It's like, here's what Google really wants to tell you, reading between the lines, the secret to Google's algorithms. And when someone is searching, maybe they want to find the original source. Maybe they want to find kind of this more elaborate, I don't know, exploration of the content itself. So just because something is original doesn't mean that it's the most relevant result when someone is looking for that information.

All right. Thank you for the long explanation.

All right. Anything else before we jump into the questions?

I submitted my question, but I can jump in now if that's fine. So if you recall, I'm checking in again about our domain migration. We migrated our domain at the beginning of August and had almost total traffic loss. After connecting in a few of these sessions, it was indicated that Google engineering was able to make a partial fix for us.
That was towards the end of September, and then we did start to see slow but some recovery. We worked our way up to almost 50% after about four months, right before the broad core update. And now we've dropped basically back to where we were post domain migration. So my question is whether you think it's possible that the partial fix that we were able to get via the engineering team got reverted, or otherwise, how should we try to make sense of what happened with the broad core update? Of course, we're trying to look through that lens neutrally and think about what we can do to improve the site. But with where we were before the domain migration loss and where we are now, it's a little hard not to think that we're still just dealing with domain migration issues. If that seems likely to you, any advice from here?

I don't know. I think it's kind of awkward that these kinds of issues pop up with domain migrations. Sorry that you're struggling with this. With the issue that you're seeing now, I don't think that would be related to the fix for the domain migration being reverted or blocked. My assumption is that it's more that the general core update is also affecting your site, and the combination of those remaining lingering issues plus the core update is essentially what you're seeing there. So I will definitely take a look with the team to see if there's something more specific going on there that we need to watch out for. One of the things that we ended up doing after running through your issue is trying to figure out how many other sites might be affected by these kinds of site migration issues. And it seems like it's extremely rare, which doesn't really help you that much. I don't know. It's like a special-case kind of thing.
But at least that's a little bit calming on my side. It's something where I still think we should be able to do a better job handling these kinds of migrations and not run into issues where people essentially have to contact us personally to get things back to normal again. But if I remember correctly, your site is about university reviews?

Yeah, it's mostly focused on online education.

Yeah. I could imagine that the core update is something that you might want to look into there. There's the blog post that we have on core updates. And I know there are a bunch of other SEOs that have dug into all of the details of what you need to watch out for with expertise, authority, trust, those kinds of topics. So I would definitely look into that. But usually when it comes to core updates, it's not just one small thing that you can fix so that it gets better. It's more of a long-term approach that you have to take there.

Yeah, we're doing that internally. And we've been working with Marie Haynes and Glenn Gabe and some other SEOs. It's hard not to feel like it's our domain migration that is still having an extremely large impact on us. And I'm honestly not sure what else to do except to check in here when I can.

No, I mean, it is perfectly fine to keep pushing on this. Because I try to watch what is happening with those discussions internally, but it's also good to see it from the external side, like, oh, it's still a problem, so we still need to look at it.

Yeah. OK. Thank you.

I'll definitely double-check, though, specifically around the core update, whether there's something that might be uniquely happening with your site that maybe we need to fix in that regard.

OK, thank you.

Sure. OK, let me run through some of the submitted questions. And if you have any questions along the way, feel free to jump on in as well.
In Search Console, we noticed that our pages are marked as "crawled, currently not indexed." Looking into the details, we see that Google has found our user-declared canonical, but then picked a completely different, unrelated page as its Google-selected canonical. Any idea why this might be?

Usually, this kind of situation is something where I'd need to look at the details, so it's a bit tricky without having any URLs. One thing you could perhaps do is post in the help forum and provide some details, some examples, so that the experts there can take a look and maybe give you some tips, or escalate that to the search team if needed.

One of the places where I have seen this happen is when a website is set up in a way that makes it hard for us to tell which parts of a URL are actually relevant. That could be, for example, if you have a number of different websites that all have the same backend system, and the same URLs lead to the same content across the different domains. Then it could be that our systems say, well, actually, the domain name isn't so relevant here; we will pick any one of these domains as the canonical for this kind of URL pattern. It could also be that you have a URL structure where maybe something in the path or in the URL parameters is deemed irrelevant by our systems. It's not irrelevant in the sense that the content is bad, but irrelevant in that, when we look at a number of different parameters, we always see the same content, and therefore our systems learn over time that this URL pattern doesn't actually lead to unique content; rather, we can pick one of the URLs in this group of URL patterns and use that as a canonical. And that happens regardless of what the actual content is on individual pages, once we've learned that pattern.
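One way to look for such a pattern on your own site is to group the URLs Googlebot requests by their parameter-stripped path. This is only a sketch: the log lines and format below are invented, and real access logs would need a proper parser.

```python
from collections import defaultdict
from urllib.parse import urlsplit

def group_crawled_urls(log_lines):
    # Map each parameter-stripped path to the set of full URLs that
    # Googlebot requested for it. Many URLs collapsing onto one path
    # can hint at a pattern Google may treat as duplicates.
    groups = defaultdict(set)
    for line in log_lines:
        if "Googlebot" not in line:
            continue
        fields = line.split('"')
        if len(fields) < 2:
            continue
        request = fields[1]              # e.g. 'GET /item?id=1 HTTP/1.1'
        parts = request.split()
        if len(parts) < 2:
            continue
        url = parts[1]
        groups[urlsplit(url).path].add(url)
    return groups

logs = [
    '66.249.66.1 - - [01/Jan/2021] "GET /item?id=1 HTTP/1.1" 200 "Googlebot"',
    '66.249.66.1 - - [01/Jan/2021] "GET /item?id=2 HTTP/1.1" 200 "Googlebot"',
    '203.0.113.7 - - [01/Jan/2021] "GET /about HTTP/1.1" 200 "Mozilla"',
]
for path, urls in sorted(group_crawled_urls(logs).items()):
    print(path, sorted(urls))  # prints: /item ['/item?id=1', '/item?id=2']
```

Paths that collect many parameter variants, all serving the same content, are the candidates to test in a browser, as described next.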
So that makes it a little bit easier for us to deal with canonicalization without having to crawl the whole site and get stuck in all of these URL patterns that essentially lead to the same content. In a case like that, it could happen that you have a URL structure that, for a large part, leads to similar content, where we get confused, but there are individual pieces in there which do lead to unique content. And then it could happen that our systems say, well, all of this is irrelevant, we will pick one here, and we don't realize that, actually, this one here is kind of an exception.

So if you think that might be the case in your situation, one thing you can do is try to figure out whether you perhaps have this kind of pattern of URLs that all lead to the same content. You could do something like looking at your server logs to see which URLs are being crawled. And based on those, try those URLs out in your browser and see: is this something that all leads to the same content? Is Google somehow getting stuck with URL parameters or with crawling of the website? Based on that, you can work to exclude the discovery of these irrelevant URLs and focus on the ones that you really do want to have indexed.

I noticed that often, when we publish a new article on our site, the article's ranking improves as time goes by. For example, in this case, it took five months from the date of publication of the article to reach a stable ranking. This happened despite the fact that the article has not changed in five months. It's been the same from the beginning, so I would expect the article's ranking to be the same right away. Obviously, this is not the case. So what happens on Google's side in the time between the publication of a new page and the stabilization of its ranking?
I didn't take a look at that specific example, but it is very common that it takes some time for our signals to kind of settle down and to better understand what is on a specific page. And this is something that is very visible. For example, if you have a new website, then you might publish that new website. And in the beginning, Google might go, oh, it's a new website. We found it. Like if someone is explicitly looking for it, we'll probably show it. But over time, over a couple of months, we will try to understand, well, what is the relevance of this new website? In which regards is it important for the web? And then we'll show it maybe for more and more queries over time. And that's not something where our systems, I don't know, it's essentially just something that takes time for us to collect all of the signals that are associated with these pages. And it's not something that can happen from one day to the next. And it can also happen for individual pages as well. So it's not just specific to new websites like that. For some queries, I see Google ranking websites which have a high amount of ad content, which is creating a poor search experience. How is Google dealing with those sites? It's hard to say without having examples. But there are various things that we do take into account with regards to the user experience side of things. So on the one hand, we did, I think, a couple of years ago, an update where the above the fold content is something that we weighed a little bit more. So that's something where if there's a lot of ad content above the fold, then perhaps it might be affected by that. There are other updates that have happened in the past with regards to speed. There's the Core Web Vitals, which is going to be launched, I think, in May with regards to ranking in search results, which also helps there. 
But the other thing to keep in mind is that we use a lot of different factors to determine ranking in the search results, to try to understand what is relevant for users in individual situations. And it can very well happen that a page is extremely relevant in some regards but still has a really bad user experience, and we will still show it in the search results. Sometimes we show it highly in the search results. So even if a page has a bad user experience, and even if we were to take that bad user experience into account, it doesn't mean we would never show that page. This is something that is really common. For example, if you search for a website's name, then you would expect to find that website. Even if that website is doing weird things and has a really bad user experience, you would still expect to find it. And there is a broad range of different kinds of queries and kinds of understanding of relevance that flow into things like that.

So with that said, it's always possible to find these kinds of pages in the search results. It's extremely rare for us to manually go in and say, we will completely remove this website from search so that it never shows up for any queries. We usually reserve that specifically for cases where the whole website is essentially irrelevant, where the whole website is just scraping content from the rest of the web, and there's nothing of unique value at all on it. That's something where the web spam team might go in and say, this is a pure spam website, there is nothing of value here, and then we'll remove that from search. But for everything else, it's kind of like, we can still show it, and in some cases, the other factors play a larger role.

I think this is also really important, because a lot of people don't know everything that they should be doing on a website. They don't know all of the details of what is important or what they should not do.
They don't know whether those tricks that they heard from friends are really bad, or just kind of bad, or whether they kind of work sometimes. And they end up doing lots of weird things. With all of these websites that do things which are kind of suboptimal, where as an expert you might look at it and say, oh, they should not be doing this, this is so clearly black hat, against the webmaster guidelines, they might not know, and they might be a legitimate small business that just has their website like that. In cases like that, I think we should definitely still continue to show that website. It's not that it's completely irrelevant for users. Maybe they're just doing things in ways where they don't know any better.

Let's see. We're updating the design of a website, plus a few URL changes. Content remains largely the same but is getting a few updates where necessary; there are about 2,000 URLs. Given the core update is still rolling out, should I hang on for a week or push the go-live button now?

I would just move forward. The core updates that we do tend to be focused more on broader changes in the search results. And it's not that, if you make a change right at the same time, it suddenly clashes and breaks anything. So in that regard, I would not hold off and artificially hold myself back. If you're making improvements on your website, and usually if you change things like this across the site, then I would classify that as improvements, I would just move forward and do those improvements so that they're live as early as possible.

Thanks, John.

Cool. Let's see. We changed our website's domain name. The old one's search ranking decreased to zero. However, the new one's search ranking is much lower than the old site's ranking. This reminds me of a different question. However, let's see. Some experts said it takes up to six months to achieve the original rankings. How can we speed up the SEO results?
Some backlinks link to the old site. Do we need to change the links to the new site?

For the most part, one of the things I would strongly recommend is making sure that you're following the guidelines that we have, specifically for site moves, in as much detail as possible. Especially if the site move is still fairly fresh, make sure that you have things like all of the redirects set up, so that all of the old content really moves to the new site. That's something I would do. I would also double-check all the settings in Search Console so that you have all of that lined up. But in our guidelines, we also mention the links to the site, and we do mention that it makes sense to get those updated as well. In particular, we use the links to the site for canonicalization. So if we're not sure whether we should show the old one or the new one, then if we see that there are lots of links to one of these versions, that helps us to make that decision. It sounds like canonicalization is less of an issue there, if you are seeing that there's already no traffic going to the old domain. But still, I think updating some of those links where you can makes sense. We ran into the same kind of issue on our side as well when we did the migration to Google Search Central from Webmaster Central, because I think the Twitter pages or the YouTube main account pages don't redirect, for whatever reason. So we're also discussing internally whether we should ask people to help update the links, which is kind of weird. But I guess everyone has to do that. In general, it can take a while for things to settle down after a site migration, but it should, for the most part, be fairly straightforward.
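The "redirects set up" part of a site move could be sketched like this; nginx syntax with placeholder domain names, as one assumed setup, since the actual server and domains will differ:

```nginx
# Hypothetical nginx server block: 301-redirect every URL on the old
# domain to the same path on the new domain, so that all of the old
# content really does move to the new site.
server {
    listen 80;
    server_name old-example.com www.old-example.com;
    return 301 https://www.new-example.com$request_uri;
}
```

The per-path `$request_uri` forwarding matters: redirecting everything to the new homepage, instead of to the matching page, loses the page-level signals the guidelines are trying to preserve.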
The one situation where I have seen issues with site migrations, or I guess the more common one, is when the new domain name, the one that you're moving to, has a really weird history associated with it. Then it can get tricky for us to understand what this migration means. So in particular, if the new domain is hosting, or was hosting, a lot of web spam content, and you move there, then we have all of those spammy signals that are associated with the domain name that you move to, plus the new signals from your old content. And that mix is sometimes hard for us to figure out. So that's something you might want to look into. In general, if you're moving to a domain, or if you're starting a website on a new domain, looking at the domain's history is something I'd still recommend doing. There are lots of things that you can clean up when taking on a used domain name, but some things are a lot more time-intensive than others. In particular, if a website was deeply involved in web spam and has a lot of really problematic links from across the web, then that's something that you might want to clean up ahead of time, which takes a lot of time. Another situation to keep in mind, where it will take a bit of time, is if the domain that you're moving to used to host a lot of adult content. Then we might have that domain in our Safe Search list, essentially. As we see that your content is now hosted there, that will update automatically, but it can take a bit of time for us to switch from, oh, we should be filtering this domain with Safe Search, to, this is actually a completely normal domain with traditional content, let's call it. So those are the situations that I would watch out for.
We're moving a subdomain, like m.domain.com, to www.domain.com. The site is now responsive, and the mobile version is no longer needed. Should I use the Change of Address tool in Search Console?

You can use it; you don't need to, though. The Change of Address tool in particular helps us when you're moving to a new domain name, because then we can forward those more domain-level signals a little bit easier than if we had to do that on a per-URL basis. But if you're just moving between subdomains within the same domain, then that's not something you'd need to do. It also doesn't cause any problems, so if you've done that in the past and you're like, oh no, should I revert it? Don't worry about that.

We're exploring web stories using Tappable, which we have embedded in our site. When I test the URL in the URL Inspection tool, it doesn't show up. What can we do about it?

I don't know what Tappable is, so it's hard for me to say what exactly to do there. Web stories are based on AMP pages. So for web stories, you would use the AMP testing tool to see that they work well and that the structured data there is recognized properly. That might be something to check out. With regards to whether or not these URLs are indexed: web stories, by design, are normal AMP pages, like I said, so they can get indexed individually. And they're set up in a way that they're canonical AMP. That means it's not like AMP pages that are associated with the rest of your website, where you have the canonical as the traditional HTML page and the AMP HTML as the alternate version. For web stories, you only have the AMP version, and essentially that can get indexed like any other normal content. We currently don't show web stories in all places, all countries, I think. But we do show AMP content, pretty much, I think, in all countries. So if you have a web story, what will happen is it might be shown in these web-story-specific surfaces if you're in one of those countries.
But if you're not in one of those countries, it'll be shown in the normal web search results, just like any other web page might be. Since it's also a normal web page like any other, you need to think about the usual SEO aspects there, too. So things like titles, headings, text on the page, alt attributes for images, all of the usual things, they all apply to web stories. It's a bit trickier with web stories, because traditionally you want to make them visually impressive, which means you don't have a big box of text on these web stories. But you still need textual content so that we can rank those pages properly.

A question about AMP: 97% of my website's traffic is from desktop users. Should I be worried about AMP? Because in Search Console, I can see some AMP errors for my website. Is it worth spending time and effort to fix those issues, since the majority of the traffic is from desktop? Will it impact ranking in the long term?

So AMP pages are responsive; essentially, you can use them on desktop, too. It's not something that is purely mobile. There are lots of sites that use AMP as their primary framework for making content, and that's perfectly fine. So just because you have a lot of desktop users doesn't mean you should not use AMP. For example, I think the AMP.dev site is completely written in AMP, and if you look at it in a desktop browser, it looks like a normal website. With regards to AMP errors: what happens with AMP pages when we recognize there are errors on them is that we understand that they're not valid AMP pages. And that means those pages would not be kept in the AMP cache and would not be treated as AMP alternates. If you have the setup with the traditional HTML and the AMP alternate pages, and you have errors in the AMP version, then we would not show those AMP alternate URLs in the search results.
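The traditional pairing referred to here, as opposed to the self-canonical setup of web stories, is typically declared with link tags along these lines; the URLs are placeholders invented for the example:

```html
<!-- On the regular page (the canonical): -->
<link rel="canonical" href="https://example.com/article">
<link rel="amphtml" href="https://example.com/article/amp">

<!-- On the AMP alternate page, pointing back to the canonical: -->
<link rel="canonical" href="https://example.com/article">
```

A canonical AMP web story would instead carry only a self-referencing `rel="canonical"`, since there is no separate HTML version.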
So it's not that there would be a ranking effect from that, but more that you're not getting the full value out of your AMP pages, because we can't put them in the cache and can't serve them immediately when people click on them. It's more of a usability kind of thing. You're already doing a big part of the work to make those AMP pages work, but they're not completely valid, so we can't actually use them as AMP pages.

By mistake, I created a lot of tags, sometimes with the same name as the main category. It ended up that the tag pages score better in the search results than my categories. Will the use of rel canonical help, so that I can send the juice back to the main category? How long will it take until this change is implemented?

So sometimes it can happen that you have a lot of different pages on a site and the wrong ones end up ranking. Cleaning up the site structure makes it a lot easier for us to pick out which pages you really want to have shown in the search results. Cleaning up the URL structure means, on the one hand, removing or reducing the number of internal links to those kinds of pages, and on the other hand, using technical elements like rel canonical, or setting up redirects, to point at the pages that you do want to have shown in the search results. So in that regard, if you're seeing that you have a bunch of similar pages and the wrong ones are ranking, then cleaning that up definitely helps. It can improve the ranking of those pages, because if we can concentrate all the value in fewer pages, then we can make those pages a little bit stronger in search; they can be a little bit more relevant. With regards to how long that takes, it's really hard to say. In particular, we have to crawl these pages, and on a per-URL basis, we have to understand: oh, this is the same content, you have a rel canonical here, we should switch indexing to the other page. And on a per-URL basis, that takes a little bit of time.
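As a hypothetical illustration of the rel canonical part (paths and domain invented for the example), a tag page consolidating into its matching category page would carry:

```html
<!-- In the <head> of the tag page, e.g. /tag/widgets: -->
<link rel="canonical" href="https://example.com/category/widgets">
```

Combined with internal links pointing at the category page rather than the tag page, this signals which of the two near-duplicate pages should be shown in search.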
The crawling itself also takes a little bit of time. And crawling is not something where there's a fixed time frame involved. It can be that we re-crawl some pages within, I don't know, a couple of days, other pages within a couple of weeks. So that's something where it's really hard to say how long it will take. Accessibility question: we're going to make invisible headings for some sections of our pages, for example, a "main navigation" kind of heading. How will Google understand that this is not cloaking? So we probably would not understand that this is not cloaking. But this kind of hidden text is something that is super common on the web. And we essentially just deal with that. So that's less of an issue for us. Even if we were to recognize that this is kind of hidden text on a page, it's not that we would say, oh, this is a spammy page because it has hidden text on it. But at most we might say, oh, well, maybe this piece of text here is not super relevant for this page because users never see it. So in your situation, if you have "main navigation" as kind of a heading for people who are using kind of voice browsers, essentially, then people are probably not going to search for "main navigation" on Google to try to find your site. They're going to look for your actual content, which is something that would be visible there. So in that regard, even if Google were to recognize that this is something that is less relevant for your site, it wouldn't negatively affect the rest of your site. We're facing consistent impression spikes on Google Webmaster Tools. Oh, no, that's a really old tool. You should switch to Search Console. Usually, we average between 2,000 and 3,000 per day. On Fridays, it goes up to 10,000 on certain keywords. Any suggestion on what could be causing the impression spikes? It is really hard to say. On the one hand, this could be something that users are just searching more on some days.
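The invisible-heading pattern from the accessibility question is usually implemented with a "visually hidden" CSS class rather than display:none, so the text stays in the accessibility tree and screen readers still announce it. A common sketch (the class name is illustrative):

```html
<style>
  /* Moves the text off-screen while keeping it in the accessibility tree */
  .visually-hidden {
    position: absolute;
    width: 1px;
    height: 1px;
    overflow: hidden;
    clip: rect(0 0 0 0);
    white-space: nowrap;
  }
</style>

<h2 class="visually-hidden">Main navigation</h2>
<nav>
  <a href="/">Home</a>
  <a href="/products">Products</a>
</nav>
```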
And this is really common depending on the type of website that you have, that you see kind of like a weekly cycle through the number of impressions that are taking place for your site, where you have either on the weekends a lot of impressions, or during the week a lot of impressions, or during the daytime or during the evening. These kind of cycles are really common. So perhaps you're seeing something there. What I also sometimes see is that some of the rank checking tools leave behind traces in Search Console, and you also see kind of side effects like this. For the most part, these are things that we do filter out in the rest of our pipeline, but sometimes they just linger on in Search Console a bit like this. We're about to run an A/B test for a metered paywall with different variations, so each variation has its own paywall schema. We're embedding the roadblocks through a third-party tool. We can't know which versions are aimed at which users because it's on the fly. Is there an option to serve only one paywall schema for all the different tests? Sure. So the usual setup that people do, well, at least the setup that I've seen, is that they essentially provide this paywall markup in a way that matches what users would always see. So if there is, for example, kind of a lead-in that you would always show all users, and some users see more content, some users see less content, or some users can see more pages, or some users see the paywall immediately, then you kind of add the paywall markup in a way that matches what everyone would see, kind of like the lowest common denominator there. And that way, we can understand that. And it's not the case that we need to have the 100% exact, correct paywall markup on all of these pages. If we can recognize that there is paywall markup there, then that already helps us to understand that, oh, you're implementing a paywall, and it's different by user type, or by location, or whatever you use, kind of as parameters.
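The paywall markup under discussion is the schema.org isAccessibleForFree pattern. A minimal sketch for a page where everything below a free lead-in is metered (the class name and headline are illustrative):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example article headline",
  "isAccessibleForFree": false,
  "hasPart": {
    "@type": "WebPageElement",
    "isAccessibleForFree": false,
    "cssSelector": ".paywalled-section"
  }
}
</script>

<div>Free lead-in that all users and all test variations can see.</div>
<div class="paywalled-section">Content behind the meter.</div>
```

Marking up the lowest common denominator, the lead-in everyone sees regardless of variation, is the approach John describes.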
And that's totally fine for us. How do you test the structured data, specifically the paywall structured data? Essentially, you would use the Rich Results Test, like with any other kind of structured data. I think the tricky part with some of these paywall implementations is that Googlebot, of course, needs to be able to see the full content so that we can understand what it is that we should be showing your site for. And with that, we should be able to see the paywall markup as well. So if you're showing the paywall markup depending on kind of like the user there, keep in mind that probably users don't need to see the paywall markup. It's really important that we see it, though, especially if you're serving Googlebot the content and not showing it to users in some cases. Let's see. Across so many sites, about once a week, I get a notification from Search Console that data-vocabulary.org has been phased out. It's no longer an actual issue for any of the sites, so I'm curious how long Google will store and process the HTML for. Some of the pages that pop up had their last crawl up to three years ago, and even on the HTTP versions of the site, which have been redirected to HTTPS for even longer. I don't know, like once a week, that sounds pretty excessive. I thought we just sent out that notification like once for all sites that were using it. So if you're watching this and can send me some details of what you're seeing, like that would be really useful. Because this notification is something that we should be doing just once for the site. Usually with these kind of notifications, we focus on what we last had indexed for those sites. So it's less a matter of when we last crawled those, but rather like, oh, we know that this markup used to be on your site or is on your site based on the last version that we have indexed for your site, and then we'll notify you about that.
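As a side note, the data-vocabulary.org deprecation is usually resolved by swapping the old markup for the schema.org equivalent, for example for a breadcrumb trail (the names and URLs are illustrative):

```html
<!-- schema.org replacement for a deprecated data-vocabulary.org breadcrumb -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    {
      "@type": "ListItem",
      "position": 1,
      "name": "Books",
      "item": "https://example.com/books"
    },
    {
      "@type": "ListItem",
      "position": 2,
      "name": "Science Fiction",
      "item": "https://example.com/books/science-fiction"
    }
  ]
}
</script>
```

Once the old markup is gone from the indexed versions of the pages, the Search Console warning should stop.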
And like I said, this should be something that should be a one-time thing per site rather than a weekly notification. And it's mostly just meant to let you know that it's like, by the way, if you still have this here, you should probably switch it out. It shouldn't be something that's super annoying. But if you can send me some details, then I'm happy to take a look to see what might be happening there. I recently tweaked the meta title and description to improve the quality of the words and made it fresher, as I had wanted to update it for a while already. After I updated it, for one of the search queries, I noticed my site is being shown with just that one word for that keyword search, which makes me worry that the rank might never improve again, as I recently saw a drop in ranking for that particular keyword before the tweak. Can you explain why Google Search can rewrite a meta title to an extent where it only leaves one word as a meta title? I suspect you're seeing something very specific, so it's really hard to say. But it is something where we do sometimes change the titles of pages, in particular, when we can recognize that the title on a page is perhaps not the most relevant one to show to users based on their current query. So based on the query and the kind of title that you have specified on your site, we might show a different title. For example, if you have the same title across a lot of different pages, then we might assume, well, maybe this title is not so relevant for any individual page. Or if you have something really common and shared in your title, like you just have "home" for your home page, then we wouldn't show "home" as a title. We would try to figure out, well, what is this home page about, and try to show something from that as a title. And that's something, when we change the titles like that, it's not a sign that we're changing the rankings, that the site is not ranking for what it has in its titles.
It's more that we're just saying, well, the title that you have on your page is not something that encourages a lot of people to actually click through. So perhaps we should show something that is more relevant for your individual pages. If you're seeing this very often across pages on your website, then I would take a step back and try to look at the titles that you have specified on your site. Also the descriptions, maybe, if you're seeing it with the snippet. And think about what you could be doing perhaps differently to make it a little bit clearer what actually is a useful title here for users. So a really common example that I see where we do sometimes rewrite titles is if you just include a big collection of keywords. And you say, oh, I want to rank for these keywords. I will put them all in my title. When we look at that, we say, well, this is like a collection of words. It's not a description of a page. So maybe we should rewrite that. And similarly with descriptions and snippets, it's kind of the same thing. If we recognize that this snippet is essentially just a bunch of variations of kind of like SEO phrases you want to rank for, then we'll try to pick something either from the snippet that you have there or from a part of the page. And we'll try to show that to users when they search. Just a quick follow-up on that. When Google starts rewriting titles, is that kind of a good sign that you should take them and try to improve them? And will that actually help you become more relevant, since Google apparently doesn't think you're relevant enough? I don't know if it would help you become more relevant. But I think what it would definitely help you with immediately, in kind of the short term, is that more likely we will show the title that you have on your page. And if that title is better in the sense that it matches what users are looking for, it matches their intent, then that gives you more traffic.
And then indirectly over time, maybe those users recommend you and your site improves in ranking overall. But it's not so much that we would say, oh, this title is now good enough to show, therefore the website is better. No, I'm thinking in cases like, as you mentioned, maybe the title is "home" or something like that. And the title does play a role in how relevant that page is. And you change it to "red widgets", I don't know, something that is more relevant. Then shouldn't that lead to the page becoming more relevant for red widgets, since you're now actually using those words? Sure, sure. I mean, but that's kind of the, I would almost say, less common situation. The more common one I see is someone where the title is red widgets, blue widgets, green widgets, all widgets. It's like widgets, plural, singular kind of thing, like a title like that. And if you just change it to red widgets, then it helps us a little bit. But just including all variations of all keywords there doesn't make a page relevant automatically. OK, and just to be sure, when you're changing the title, does that mean you're actually ignoring whatever is in the HTML code per se? I mean, are you completely ignoring the user-generated title tag, or not? No, it's really just the title that we show in the search results. So it's not that we say, oh, we will skip this, we will not use it for ranking. It's just we don't show it for this particular query, because it also depends on what people are actually searching. Sure, yeah. John, John, is it query agnostic then? I mean, does it change on the fly? So can page titles and meta descriptions change to improve, what, if you like, search result diversification? So you pick something out and change the actual page title and meta description to sort of vary the results, I don't know. I don't know if we would do it to vary the results, but we do try to use it in a way to better explain what a user would expect when they go there. So I don't think...
It wouldn't change depending on the query. Would it depend on the document rather than the query? That's what I'm saying. It would change based on the query. So that's something that you often see, especially in a case where you have a very generic title, like "home", for example, for a page. Then you often see, depending on how you search, either "home", or kind of a rewritten version, or a name of the company, or something like that. So it would change based on the query, but it's not the case that we would change the titles based on what is shown as titles for the other pages in the search results page. Right, OK. So it wouldn't be used in Pavelka's collection of the top 10 or whatever, just to sort of switch things up and make sure there's a range of different types of results. No. OK. All right, John, regarding this, I have also seen one case where one page was ranking for three keywords. For two queries, it was showing the original title, but for the third query, it was rewriting the title. So it is not all the time like a very simple thing where we can go and just think, this title is not relevant. Sometimes we have to go ahead with the same title. Yeah, yeah. I mean, it's tricky. We've also discussed internally options like having a meta tag saying, like, I know what I'm doing, always show my title, kind of thing. But that's also really tricky. And it makes things complex in various ways. We also had a project underway at some point where in Search Console, you could specify a title for a handful of pages and say, like, I always want this title to be shown. And there also we just noticed that there are lots of weird edge cases where showing a title algorithmically just makes it much easier for users to recognize that this is actually a good page to match what they were looking for. Maria, I think you had a question. Or you raised your hand, at least. You're muted. We can't hear you. We can't hear you, Maria.
Sorry, I was muted. Sorry. Again, so yeah, I did have a question. However, when Don Anderson asked the question, it kind of clarified the question I had. However, going back again to titles and descriptions, sorry to ask, maybe this has been commented on before. One thing we are seeing at work, as I said, with small businesses and some clients, is that they saturate their title or their description only with commercial keywords, because someone told them to or something. Because they said it works for such and such person. For example, you can see in some websites things like Florist in Brighton, Flower Delivery Brighton, Wedding Flowers Preston, and just all that in a description. I don't know if that is against best practices, so why does that keep happening? People do it because they say it works. Is that against best practices? I don't know. It doesn't look right, but when a client comes to me and I see that, the first thing I say is this has to change. It doesn't look right. But I don't know. I just want to hear your point of view. Why? Yeah. Yeah, so. If I make sense. Yeah, yeah. So it's not against our webmaster guidelines. So it's not something that we would say is problematic. I think at most it's something where you could improve things if you had kind of like a better-fitting title, because we understand the relevance a little bit better. And I suspect the biggest improvement with a title in that regard there is that if you can create a title that matches what the user is actually looking for, then it's a little bit easier for them to actually click on the search result, because they think, oh, this really matches what I was looking for.
Whereas if you're looking for, I don't know, flower delivery Brighton, and as a title in the search results, you see flowers, green flowers, yellow flowers, blue flowers, Brighton, and like all of the cities nearby, then you might look at that and say, well, is this like, I don't know, some SEO result? Or is this actually a business that will do a good job and create some nice flowers for me? So that's something where I almost think it's more a matter of improving the click-through rate rather than improving the ranking. And if with the same ranking, you get a higher click-through rate because people recognize your site as being more relevant, then that's kind of a good thing. Sorry, sorry, it's just because I saw a comment saying something about funeral flowers. And yeah, I thought, I don't know. Thank you, yeah, it makes sense. I mean, it's not a practice I follow. The reason I'm asking this is because normally I always send the webmaster videos to my clients. And I say, watch from minute X to Z. And this is a common practice I see with people that launch their own website and try to give it a go on their own. They're learning SEO, and one of the most common practices I've seen is saturating titles and descriptions with keywords. So OK. Yeah, I mean, it's really a common tactic because we say as well, we use the keywords in your titles as part of our ranking systems. And then people say, oh, then I need to add all keywords to my titles. And then you end up with something like that. But just because they are used for ranking doesn't mean that you need to put everything in there. And sometimes I suspect the bigger aspect is really the click-through rate from Search rather than the ranking effect, especially for small businesses. It's like you don't have a lot of chances to be visible in search results in lots of places because you're probably more focused on your regional area.
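The contrast under discussion, a list of keywords versus a title that matches one clear intent, might look something like this (the business name and wording are illustrative):

```html
<!-- Keyword-stuffed: reads as a list of queries and is likely to be rewritten -->
<title>Flowers, Red Flowers, Flower Delivery, Florist Brighton, Wedding Flowers</title>

<!-- Descriptive: one clear intent, more likely to be shown as-is -->
<title>Flower Delivery in Brighton | Example Florist</title>
<meta name="description"
      content="Same-day flower delivery across Brighton. Hand-tied bouquets and wedding flowers from a local florist.">
```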
And having a title that is really good, that matches your business, that's a lot more important than having all of the keywords in there. Does it have the opposite effect, John? How do you mean, opposite effect? Well, if you see, take the example that Maria has just provided of the florist, a big florist selling flowers in every town or city in the UK or the US or wherever. And all of the page titles say florist in Brighton, florist in Southsea, florist in Hampstead Heath, blah, blah, blah. Can that sometimes, because they're kind of almost competing against each other as clusters, can that almost have the opposite effect to them ranking well and detract away from maybe just everything? Because they're all kind of cannibalizing each other with very, very tiny variations. Sometimes I think I see that on big sites. They actually cannibalize themselves. With different pages or with the same page? They're just competing with themselves with different pages, with many, many different pages, with small variations, with page titles being one of the quite aggressive ones that there's not a lot of difference on, if that makes sense. Yeah, that's definitely something that kind of happens there. And I think, I mean, on the one hand, I understand the interest in having pages for individual topics, but if the variations are so small, you're essentially just diluting the value of your content across those pages. And then you have a lot of pages for individual topics, but they all don't rank well, versus you have one page for a slightly more general topic, and the value is very well concentrated there. Then it's a lot easier for us to say, well, this is a really competitive page for that topic, and we should show it more. So the idea is to build fewer pages, but better ones. Yeah, exactly. Are they really competing against each other, those pages?
So, flowers is one of the typical examples where you genuinely do need a geographic page, because they might have a genuine network. Whereas a lot of these pages say "plumbers in" and then name every single area within 200 miles; they're not genuine, they're just doorway pages. But florists have kind of the longest-established e-commerce network of that distribution method anyway. And I think that goes back to what you were saying, that those pages aren't competing with each other, and they're not competing with the main homepage title and description. They should be genuinely standing alone. And florists, if I remember correctly, have the highest click-through rate of any e-commerce business, because no one searches for it unless they're buying it. Does it depend on genuine network availability then? I think that's the point that Rob's making. I think it's probably one of those things where it depends, as much as nobody wants to hear that, where you kind of have to figure out, for your specific use case, whether it's critical that you have them kind of separated out like this. Do you actually have unique value that you're providing there? Or is it something that you can combine a little bit more? I've seen this happen with, for example, if you're a hotel site, people search for hotels in Zurich, but they might search for accommodation in Zurich, travel packages in Zurich, and everything like that. So I've seen sites kind of do individual pages for each of these, just so they don't stuff the title with all of these keywords. But what I've been doing, and I don't know how good of a practice it is, is to kind of alternate some of the keywords in the title with some of the keywords in the heading. And I've noticed that Google sometimes, if you're using hotels in the title and accommodation in the heading and people are searching for accommodation, Google will sometimes just use the heading instead of the title in the search results, if that makes sense.
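The alternation the participant describes, putting one synonym in the title and the other in the main heading, would look roughly like this (the site name and wording are illustrative):

```html
<title>Hotels in Zurich | Example Travel</title>

<h1>Accommodation in Zurich</h1>
<p>Compare hotels, apartments, and other places to stay in Zurich.</p>
```

Google may then pick the h1 text as the displayed title when it matches the query more closely, which is the behavior described above.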
So can that be kind of a good practice to avoid creating multiple pages for what is basically the same user intent? Yeah, I mean, the other thing to keep in mind is that as kind of the whole machine learning side expands, and it has been expanding for a long time now, it's something where we understand a lot better what these words actually mean. So especially synonyms, like you mentioned, accommodations and hotels, like we kind of know that these are the same things. And you don't need to mention kind of those different words on the same page, because we understand it's the same thing. And I imagine for a lot of the city names, it's very similar, where we understand, well, these cities are right next to each other. So if someone is searching for one city, we could show content from the other one. Whether or not the user would click through and say, well, actually I really need like a florist in my city, or maybe this city is across the border and it's like right next to me, but I can't easily get there for whatever reasons. So it's kind of tricky there, but at least for the synonyms side of things, I think we are a lot better there. Yeah. Hi, John, I have a question. Sure. With adult websites, it's very unclear which sort of schema is basically allowed there, because in some articles you mention that a schema is definitely not allowed for adult websites, but for the majority of them, it's just not mentioned. So how can an adult website do this? Should it just implement everything and hope for the best, or how can we actually know which ones are supported and which are not? For example, the live one. Live is used a lot on blogs nowadays and on some podcasts, but adult websites cannot use live, but it's not written there that it's not supported. So how can we know? I think in our rich results guidelines, we say like none of them are useful for adult websites, but I haven't checked recently.
I don't know if anything has changed there, but at least as far as I know, all of the types of rich results that I'm aware of are explicitly not meant for adult content websites. And I imagine, so I don't think there's any kind of manual action or web spam action that takes place in something like that, but it's more that our systems recognize, oh, this is an adult website and it wants to show these rich results types, but since it's an adult website, we just won't show them. So it's not like it'll be flagged or anything. Then I have a second question. Let's say that a company has an adult business and all these schemas are not supported. They launch a second business which features, let's just say, no nudity, but schemas are also not picked up there, because Google still thinks that's an adult website. So what makes a website adult is my question. What makes a website adult? I don't know. So it's... That's one of those, if you have to ask, you probably already know the answer. Yeah, I mean, there's, let's say, a website such as Twitch. I mean, Twitch has all of these things supported. Let's say that we make websites like Twitch. It's still not gonna get picked up, and I wonder why. Yeah, so the thing is, I think with a lot of the safe search filters, we try to apply them to kind of a broader URL pattern on a website. So if we see that the whole domain is adult content for the most part, and you have some small part that is not adult, then probably we would filter the whole domain, because it's just... Okay, yeah. I mean, our systems just kind of want to stay on the safe side there. If you have individual subdomains where some of them are adult and some of them are not, that makes it a little bit easier. If you have separate domains, then that obviously makes it a lot easier for us to understand that these are completely separate websites that should be treated differently. So it's something we also see, I think, sometimes with...
If it's filtered by safe search, then it's adult, pretty much. Sorry? If it's filtered by safe search, then it's adult, pretty much. Yeah, yeah. I mean, I don't think we make any clear distinction between filtered by safe search and adult and porn or whatever the different categories are there. It's mostly like we recognize it's filtered by safe search, and then probably we wouldn't show any rich results there. It also happens the other way around, where some sites might have, I don't know, classified sections which are for adults, and then if that section is embedded within the main website in a way that is hard to separate out, then we might say, well, we don't know how much of this site should be filtered by safe search. Maybe we'll filter too much, maybe we won't filter enough. On the other hand, if you move that to a subdomain, then it's a lot easier to say, oh, this subdomain should be treated like this, the other subdomain should be treated differently. Okay, understood. Does Webmaster Tools tell you if your site is held under safe search? Does it? No. Yeah, I think it's one of those things. Probably you know if your site is really adult content, but if it's kind of a tricky thing, then I've also seen that happen, that they go to the forum: my site is not showing up in search. And then someone will say, oh, well, if you turn off the safe search filter, it is showing in search. And then they try to figure out how safe search got confused, or why safe search is so picky in this particular situation. Because it's one of those things where different cultures have different thresholds, almost, of what they would consider to be problematic or adult content. And we, in our algorithms, have to make a call somewhere. I don't know whether you saw that there was a case, well, it was not really a case study. Fabio from Transferwise did a little bit of an investigation into why something suddenly just tanked. And he found that adding expletives seems to have a bad impact.
Is that the case, John? Because sometimes it's quite difficult with communities, etc. And he found, I don't know whether it was just correlation versus causation, but when lots and lots of swear words got added to some content, it seemed to have a very adverse impact on the rankings. Is that something that might happen, or is that just correlation versus causation? Yeah, I don't think that's something that would be by design. It's more of a side-effect kind of thing. Also, if it's any site with a forum or comment section, I mean, literally every comment section in the world would... I know, exactly. Yeah. He just seemed to think that that was something that had an impact, and I don't know. I don't know. I didn't see that. Sounds interesting, though. But I don't think that would be by design. Just adding some curse words to a page shouldn't make that page less relevant suddenly. Okay. Cool. Okay. I have to jump off to another meeting soon. Maybe I'll pause here. I have another Hangout lined up for next Friday, so it's not the last one this year. And if any of you want to join then, feel free to jump on in. I think it'll probably be a little bit looser because it's like the last Friday before the holidays. We'll see. But thank you all for joining in. It's been really interesting. Lots of good questions, lots of feedback also to pass on to your teams. I hope you all found this useful, and maybe we'll see each other again in one of the next Hangouts. Thanks everyone. Bye. Bye, John. Bye.