All right. Welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do are these office-hours hangouts where people can join in and ask any questions around their website and Google Search. A bunch of questions were submitted already, so lots of things lined up. But as always, if any of you want to get started with the first question, feel free to jump on in now.

Hi, John. I have a question about site architecture. OK. I can go ahead. I'll try to explain clearly. If I get confusing, please let me know. So we have a site in the personal injury legal niche in the USA. And we structure it in a topical silo structure. So we'll have categories of main topics like car accidents and slip and falls and medical malpractice. And then below each one of those, we'll have related subtopics. So for instance, car accidents will have rear-end accidents and truck accidents and motorcycle accidents. And those will be in-depth research articles, 2,000 words plus. And then below those, we have questions and answers from visitors, which we link to as individual pages. And those are all well-moderated 300-400-word pages where we have an attorney give an in-depth personal answer. So they couldn't really be combined into one large page of Q&A. And we're wondering if it would be better to move those pages from their current location, where they're linked to from the bottom of those topical pages, into a separate Q&A section. Because right now, there is no separate section for questions and answers. It's just in those topical sections and at the bottom of those pages. And I guess the broader question would be, is it better to have long, in-depth articles that are very high quality in one section by themselves and not mix in any other kind of lower-quality page? I don't want to say they're low-quality Q&A. But just by nature, they're going to be shorter. They're going to be more specific. They're not going to get as much engagement, that kind of thing. So would it be better to have a separate section for those and put them over there, and then have the high-quality stuff all in one section within that topic?

I think you could go either way. So what I would recommend doing there is just trying it out, investigating it, and seeing what works best on your side. So with the Q&A content, I think that's something that might be worth trying out to see what you can do with that kind of content. If that's something that could be shown in featured snippets, for example, that might be helpful. That might also be something where you might say, well, I want to avoid having this content directly visible in Search, because I prefer people come to my site to get that information. It depends a little bit on what your goal is with that Q&A content: whether it's more a matter of getting the information out there and being seen as a resource that has a lot of smart information, or whether your goal is more to get people to your site, because you know that on your site you can convert them best. So that's something worth thinking about, what your goals are with this content. What would you like to have happen with it?

OK, but in terms of mixing those high-quality pages with the Q&A in one section and linking to each other within that area, that wouldn't be seen as kind of lowering the overall quality of the whole section of the site? I don't think so. OK. I think that's something which is essentially a strategic decision on your site. OK, thank you.
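For illustration, here is a hypothetical sketch of the two URL layouts being weighed in that question; all directory names and paths are invented for the example:

```
# Option A: Q&A pages nested inside each topical silo
/car-accidents/                                        (category page)
/car-accidents/rear-end-accidents/                     (in-depth article)
/car-accidents/qa/who-is-at-fault-in-a-rear-end-crash/ (short Q&A page)

# Option B: Q&A pages moved into a separate section
/car-accidents/rear-end-accidents/
/qa/who-is-at-fault-in-a-rear-end-crash/
```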
All right. More questions from any of you all? I've got some questions. All right, we'll see what we can try, what we can manage.

I was wondering if there's any advice, outside of the advice already given, for those who were impacted by the August 1st-ish broad core update? Is there anything more you want to tell anybody about that that hasn't been communicated, outside of just keep working on your site and making it better? Not really. So that's something we've looked at together for a while, to see what we can tell people about that. But essentially, these are algorithm changes that can happen anytime. And it's not a matter of us saying the sites are suddenly bad and they need to change, but rather things have evolved. Like, things change overall. And then that can be reflected in the search results as well. OK, thank you.

My next question, I think my last question, is about the new Google Search Console. The help document for the link report specifically says that the report is more accurate, and because of that, the link counts might be lower in the new report versus the old report. Fewer links, I guess. I think it says that. It's calculated slightly differently. So in our testing, we noticed some bigger differences, and we tried to get that as similar as possible. But it's calculated slightly differently. So it would be normal to see some amount of difference between those two reports. Yeah, so the help document specifically says lower, but that the drop in links is not necessarily a reflection of your website, et cetera. But you're saying it could be either way; it can go up or down? Yeah. What has changed to make it more accurate, specifically? That's what I'm trying to understand: why is it more accurate? I don't know what we'd be able to say there directly. So it's something where we're getting this data directly from the search pipelines and trying to show that as cleanly as possible. So it's slightly different from the old setup that we had. But why is it more accurate? I don't know. Good question. I'd have to check with the team. OK. I don't think anybody sees the new report yet. I guess it'll be rolling out very soon? It should be live for some people already. So kind of how we roll these out is that we start off, I think, with 1% and show it to those people, and kind of incrementally ramp that up. And usually, over a period of a day or so, everyone should have access to that. I checked five minutes ago, it wasn't there, and now it's there for me. OK, so now you have a chance to analyze the link difference and see which one is better. Let us know. Cool.

All right. Any other questions before we get started with the submitted ones? No? OK, that's fine too. If anything comes up, feel free to jump on in. And as always, if you have any questions about the answers or the questions, feel free to jump on in as well.

Google Search Console shows incoming links. Is it enough to disavow the bad ones that are shown in order to get rid of the bad links? So the Disavow Backlinks tool in Search Console is a way of telling us to ignore specific links to your site. Usually, that's something that you can do in situations where you know that maybe a previous SEO, or, I don't know, you yourself at a time when you accidentally did something like this, went out and did a lot of weird things with the links to your site. And you can't clean them up completely, but you'd like to make sure that Google's algorithms don't take those into account. So that's one place you can do that. Another time might be if you see something where you're worried about negative SEO, that maybe a competitor is linking to your site in a way that you feel could be problematic. For the most part, we take care of that algorithmically on our side. But if you're unsure and you're just, like, losing sleep over these links to your site that you don't want Google to take into account, you can disavow them. So those are essentially the main reasons to use this tool. So you can use the Disavow Links tool to let us know about these bad links that you don't want taken into account. With regards to whether it's enough, or whether you need to do more: for the most part, I would do this if you're aware of a significant issue on your site. That could be because you know that someone did something really crazy a while back. It could be because you're worried about some negative SEO. It could also be because you have a manual action based on links already, where maybe someone from the web spam team has taken a look and said, well, actually, you've been doing some sneaky stuff and you need to clean that up. So those are kind of the situations there. With regards to whether it's enough to disavow just the ones shown in Search Console or not: in general, those are the ones that I would focus on. Sometimes, if you're aware of link building that was done in a significant way, then the Search Console links can give you an idea of the general pattern, and then it's still worth looking to see if there are more links that you can also include there. But for the most part, the data that's shown in Search Console is really the links that you should be focusing on. And this should be pretty similar across the new and the old Search Console.
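For reference, the file uploaded through the Disavow Links tool is a plain text file, one entry per line, with either a full URL or a `domain:` prefix to cover a whole host; lines starting with `#` are comments. The domains and URLs below are made up for illustration:

```
# Links from a network a previous SEO built, where cleanup requests failed
domain:spammy-directory.example
domain:paid-links.example

# Individual pages we could not get removed
http://forum.example/thread?id=12345
```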
We notice our website's structured data reviews and offers are no longer visible in organic search results. We rarely see any reviews in organic search results. Are reviews and offers being removed from organic search results? No. Reviews and kind of the ratings and offers are still shown in organic search results. As far as I know, I'm not aware of any plans to remove them. However, we do have several things that we try to focus on when it comes to showing these kinds of rich results in the search results. On the one hand, we require that the markup is technically implemented correctly. We require that it matches the guidelines that we have, so kind of logically implemented in a correct way. And thirdly, we require that the site is of reasonable quality. And this is something that can change over time, in that our algorithms look at a site and say, well, actually, this is pretty good, and then maybe the next time they look at the site, they say, well, maybe it's not as good as we thought initially. So it can happen that we show your reviews or your rich results at one point, and then at a later point, maybe we don't show them. That's completely normal. Also, it's normal that we don't always show rich results in the search results. For example, you could imagine a situation where all pages in a search results page have all kinds of rich results, and that's kind of, I don't know, too much from a visual point of view in a search results page. So sometimes we have to pick and choose and say, well, in this case, we'll show reviews here, we'll show a video snippet for this one, maybe we'll show some rich card for one of the other results there. But we won't show all of the combinations that are actually marked up on the page. And that's also completely normal.
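As context for the first of those three requirements, technically correct markup, here is a minimal sketch of review-and-offer structured data in JSON-LD; the product name, values, and placement are invented for the example, and showing the rich result remains at Google's discretion:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.4",
    "reviewCount": "89"
  },
  "offers": {
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```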
The question goes on: after a recent Google search algorithm update, we've seen a dramatic negative impact on our average position for our most common keywords. Our site has not changed. Can you describe what the changes were that might be affecting the SEO? This is, kind of as we mentioned before, these are general algorithm updates that we make from time to time with regards to search. And it's not that your site is worse or different in any way. It's essentially just our algorithms re-evaluating what we think makes sense, what we think is relevant in the search results for specific queries and users. And this can and should change over time. That's completely normal. One example you might use is a query like "best movies": that's something where, obviously, if you looked at it last year, you'd have one set of movies, and maybe if you look at it this year, you'd have a different set of movies. And it's not that the old movies are worse in any way. It's just, well, things have changed. And that's how things are evolving in general overall.

We have a site that is targeted towards European users, and we're in the midst of setting up two other sites that are mainly for non-European users, so for Africa and the rest of the world. The content is roughly the same, as the service we're providing is very similar. What's the best way to go about this? Hreflang? Will we get penalized for duplicate content? These sites will be on their own ccTLDs. So maybe first off, if you're looking at Africa and the rest of the world, there is no ccTLD that's available for a general region. A ccTLD is a country-code top-level domain. So that's something where, if you're setting up new domains for all African countries, that's quite different than setting up one new domain for all of Africa. So maybe that's just the word that you chose there, ccTLD versus just a generic kind of domain name. That's one thing to keep in mind. What I would recommend doing there is using the hreflang markup to let us know which version of your content is relevant to which users. So with the hreflang markup, you can tell us: this is actually the same content as this other thing that I have here, but this is for users in this country, or this is in English for users in this country. And you can specify a whole list of countries there. You can say this is in English for all of these African countries, for example, and this is in English for all of these European countries, for example. And with this list of kind of equivalents, where we see this is one version, this is the other version, we can swap out those URLs as we see them in the search results, so that users get the version that best matches their language or their location. And there's no penalty for duplicate content with regards to something like this. We just pick one of these to show, and with the hreflang markup, we try to pick the right one. So that's probably the direction you'd want to look into.

Can I add on to that question? Sure. OK, so that is my question. And I want to add: how do I tell Google which one I want for countries that haven't been specified? So I have English UK, English Canada, English US, let's say. But what if I want everything from English everywhere else to go to Canada? Because by default, I don't think Google would pick Canada, right?
How do I do that? With the hreflang annotations, you have three different levels that you can use. One is the country plus language, so that would be, like, Canada English. One is just the language; that could be English. And one is kind of the x-default, which is everything else. So by default, we try to pick the most specific one that we have there. So if it's a user in Canada that's searching in English and we have a Canada English version, then we'll send them there. And if we don't have a Canada English version, but we have a generic English version, then for users in other countries that are searching in English, we'll send them to that generic English version. And finally, if we have someone that's searching in a different language and we have an x-default version, we'll send them to the x-default version. So that's kind of the order of precedence that we try to use there.

What about with news content, with news articles, stock news, any type of news? How would that apply to this? Would it be better to have one central location with all the news articles, like news.domain.com, and have all the sites link to that? Or would it be better to have news on each one, like news.domain.ca, news.domain.eu, news.domain-whatever else? That's a good question. I don't think we've actually talked about that in the past. So one of the things with hreflang annotations is we need to understand the different versions before we can actually use the annotations. So that means we have to crawl and reindex the individual versions at least twice: first to see the initial annotation, and then a second time to kind of confirm that. And that's something that we have to do across these different language versions. So if you have just, like, Canada and UK, for example, we'd have to crawl both of those versions and index them at least twice to use the hreflang annotation between them. So for news content that you need to have visible fairly quickly, that's probably a little bit inefficient. So for that, it would probably be more efficient to just have one place where you have your news content, so that we can crawl that once, we can pick it up, and we can show it as quickly as possible in the search results. And we don't have to worry about matching the hreflang annotations there. So that might be something where you could say, well, for the kind of stable content, we use hreflang annotations; for the home page, we use hreflang annotations. And the content that's quickly changing, that's quickly coming into the index, that you want to have indexed quickly, you put that into maybe a news section that's shared across these different sites. Great. Thanks so much. That makes my life so much easier. Fantastic. Cool. All right.
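A minimal sketch of those three levels of hreflang annotation, as link elements in the head of each version. The domains are invented for the example, and the full set of annotations, including a self-reference, needs to appear on every listed page:

```html
<!-- Country + language: the most specific match wins -->
<link rel="alternate" hreflang="en-ca" href="https://example.ca/" />
<link rel="alternate" hreflang="en-gb" href="https://example.co.uk/" />
<link rel="alternate" hreflang="en-us" href="https://example.com/" />

<!-- Language only: English searchers in any country not listed above.
     Pointing this at the Canadian site sends "English everywhere else"
     to Canada, per the precedence described above. -->
<link rel="alternate" hreflang="en" href="https://example.ca/" />

<!-- Everything else: searchers in other languages -->
<link rel="alternate" hreflang="x-default" href="https://example.com/" />
```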
Now, more Search Console questions. Search Console informed me that we have server errors for Googlebot for smartphones. I checked the server logs, and there were 503 errors for planned maintenance downtime. Assuming this is the correct status code for planned downtime, should I mark these errors as fixed? Is it worth adding a Retry-After header? Usually, the site is down maybe one hour every 4-5 weeks. So ideally, of course, you wouldn't have any downtime. In practice, sometimes you kind of have to take this into account, and maybe one hour every four to five weeks is not too crazy. So that's something we should be able to live with. Using a 503 status code is perfect for this. It essentially tells us this is temporarily unavailable. And for Google, it means that we don't change anything in our indexing for that time. We just retry a little bit later, and if it works a little bit later, then that's perfectly fine. So it's not that these pages would drop out of our index because of a 503 error. It's not that they would rank lower because they're temporarily unavailable. It's essentially the correct status code here. The Retry-After header is always a good practice. I don't know how much we take that into account, because sometimes we see these types of headers used in a very generic way, and we aren't always able to trust them completely. So if you say something like retry after two days and we retry that after a couple of hours, that might be completely normal and not a sign that we don't trust your website. But if you're talking about a one-hour downtime, then that shouldn't be a problem either way. Regarding marking these as fixed in Search Console: using the mark-as-fixed feature, especially in the older Search Console, is essentially just in the UI. So it doesn't change anything in the processing on our side. It's essentially just hiding them from you in Search Console. So it makes it look like this problem never occurred, but it doesn't actually change anything with the processing. So it's not something you need to do. It's something you can do if you want to have a list of tasks that you're working on. You can mark these as fixed and say, well, I cleaned this up, or I double-checked these and they're all OK, and tell us that you fixed this so that we don't show the error to you anymore. But it doesn't change anything on our side with the processing. So if you're looking at a downtime of an hour or so for your website and you serve the 503 properly, then I think that's perfectly fine. You don't need to do anything additional then. If you're serving a 503 for a longer period of time, like several days in a row, then that's something where our systems might start to worry about your site and say, well, how temporary is this downtime? Is this website completely gone now? Should we start removing pages from Search because of that? So if at all possible, make sure that you really limit the downtime to something reasonable. And something like an hour is kind of reasonable. Of course, people expect that things are available all the time. But something like an hour, if you have a nice 503 page that tells the user that, hey, we're temporarily down because we're doing backups or because we're doing something, that's perfectly fine.
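A sketch of what such a maintenance response could look like on the wire; the Retry-After value (seconds, or alternatively an HTTP date) and the page body are illustrative:

```
HTTP/1.1 503 Service Unavailable
Retry-After: 3600
Content-Type: text/html; charset=utf-8

<html><body>We're temporarily down for maintenance. Back in about an hour.</body></html>
```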
Can we also use a 303 status code after moving from HTTP to HTTPS, or is only a 301 recommended? We strongly recommend using a clean 301 redirect on a per-URL basis for HTTPS migrations. So you can use other types of redirects, but the 301 redirect is really the one that we watch out for. And if we can recognize that it's really a clean migration from HTTP to HTTPS (all of the old URLs have moved to the new ones, you're not removing things, you're not noindexing or robots.txt-disallowing pages differently on HTTPS), then that makes it a lot easier for us to trust it as one big site move from HTTP to HTTPS. So the clearer you can tell us that this is really just a generic move, and that we don't have to think about any of the details, the more likely we can just switch that over without you seeing any big change at all. If you start using other HTTP result codes for the redirects, then that makes it such that we kind of have to reconsider and think, well, are they doing something unique here that's not just a generic site move? And then, at that point, we have to reprocess really each URL individually and think, well, what is the webmaster trying to do here with this specific case? And that makes these moves take a lot longer, and makes it a lot harder for us to just pass on all of the signals to the new version of the site.
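As one possible way to implement that blanket per-URL 301, here is a minimal sketch assuming an Apache server with mod_rewrite; other servers have equivalent mechanisms:

```
# Hypothetical .htaccess sketch: 301 every HTTP URL to the same
# host and path on HTTPS, so the move stays a clean 1:1 migration.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
```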
I have a website, and it's a YMYL website. And we're doing well, but fighting hard to rank against my competitors. They have good PageRank, and their content is not as good as ours. What can I do to grow as fast as they are? I don't have any kind of magic bullet that you can take to any website, regardless of its kind, and just say, well, this is what you need to do to make your website better. It's really something where you have to be the expert of your niche, be the expert of the whole area that your website is active in, and you have to work out what it is that your users expect and how you can fulfill that, or how you can make it clear that you're focusing on something different. And there is no meta tag there. There is no magic "use every keyword twice on a page" type thing that will make it work better. And there are a few questions like this that were submitted with regards to individual sites. And pretty much for all of them, I don't have any kind of magic bullet that you can just apply and say, well, this makes your website so much better than all of your competitors. It's really something you have to work out on your own. And what sometimes helps is to get input from other webmasters who work on other websites. So going to a webmaster help forum can be really insightful, to get feedback from other people who might look at your site and say, well, this is a fantastic site, it should be ranking better, and I should recommend it to all my friends. Or they might look at it and say, oh, this is so ugly, this is like from the '80s, and you should be doing something different. It's worth taking all of that feedback into account. Some of it you might not want to use, but it's worth collecting so that you at least have some insight into how other people might be viewing your website. But there's no magic bullet. There is no simple meta tag that you can use to make your site rank above all of the other ones out there. And if there were a meta tag like that, then, of course, your competitors would be using it as well, and then you'd be in the same situation again.

Hi, John. I have a related question to that one. Sure. In terms of your-money-or-your-life sites, the recent quality rater guidelines that came out seem to focus a lot more on creator reputation and site reputation, particularly in the area of those types of sites. And with the recent update, it seems like a lot of sites in the medical field, your-money-or-your-life areas, that didn't have a good reputation got demoted. And my question is about, say, we have a site with multiple authors, and then we find out that one of the authors has a poor reputation or has developed a poor reputation. Is there a way to remove the author from the site but keep the content, in that we have editors who review the content and make sure it's all good and it's still good quality? But because this negative reputation is attached to it, it will demote the quality of that content, and then the site overall, because it's part of the entire group of content. Is there any way to deal with that?

Essentially, those are just normal site changes. So there is nothing special that you'd need to do from a Google point of view if you want to make these changes. Yeah, it's just removing the author from the site, any mention of the author from the site. And then would you also have to remove that content? Is there no way to salvage that content if it's still good content, but because of the author's reputation it's been sullied? I don't know. So I wouldn't look at the quality rater guidelines as something where our algorithms are explicitly checking out the reputation of all authors and then using that to rank your website. So that's something where I would see this as an organic change on your website, where if you're saying, well, I don't want to be associated with this person anymore because they've turned out to be a terrible person, and all of the content that they've written, looking back, I don't even want to have on my website anymore, that's essentially, like, an organic change on your website. And just over time, Google would pick up on that and reprocess everything. Yeah, I mean, that's essentially just normal content changes as they happen. And it's not anything where we would explicitly say, well, this piece was written by this author, and now that author is no longer mentioned on the list of authors of this website, therefore they don't want to be associated with this person anymore, but the content is still here and it's still working. I would look at it more as just organic changes as they would usually happen on a website, and deal with it the way that you would deal with anything if search engines weren't around. Like, what would you do in a case like that? Is this so bad that you want to remove all of this content? Or is this just something where you say, well, I just don't want them to write new articles for me anymore? OK, thanks.

Hi, John. I have a question for you related to content quality, how Google evaluates content quality. Does Google use the transition words and the passive form of verbs when it evaluates the quality of, let's say, the text from a page or something like that? We already know that the position is not so important, where it's positioned, according to information from a Gary Illyes tweet. But what about the transition words? Do you measure them, or do you use them? I don't know. I don't know how we would analyze the full text there. We try to use some of that to figure out the context and the elements on a page. And some of that we also use when it comes to ranking the page, for when someone is searching for something using these prepositions and conjunctions and whatever they're called. So from my point of view, I would mostly focus on just writing naturally, and not, like, putting a big blob of keywords that are disconnected from the rest of the content onto a page. So you mean more natural language and more audience-centered content, what users expect to see? Yeah, exactly.

Can I ask a follow-up question on author reputation? Sure. OK, so we have our own in-house writer, and we're looking to expand our content. And we're going to be bringing in outsourced writers.
How do I know that the outsourced writers I bring in, especially now at this early stage, don't have a bad reputation, where we'll be penalized for bringing somebody in who, unbeknownst to us, has a bad reputation as an author? How do I avoid that? How do I protect myself? Essentially, it's like anything that you would do on a website normally. So there is nothing that I'm aware of on our side that would be automatically researching authors to figure out what all they have written and what all they've done in the past. So that's something where, at least from our side, I'm not aware of anything specific there. So if you're looking into adding new authors to your website, then I would evaluate them like you would anything else on your website, anything else that you bring into your business. If you have a graphic artist that's creating illustrations for your website, then you'd want to look into what they've done in the past, and whether it matches the style that you want to provide on your website, natural things like that. Specifically, what I'm referring to is: we want to be credible. So we want to have our authors displayed. We're going to link back to their profiles, whatever, so that everybody knows who this is. Now, say this author in the past wrote a different blog about a subject, I don't know, something else, and it was all keyword-stuffing articles, blah, blah, blah. Now, Google knows that this happened. And I don't know if you guys associate this author with bad practices that he might have used in the past. By me hiring an author that had done this in the past, does this bring a penalty onto my site now? And if that happens, especially at an early stage, I mean, that's going to be difficult for us to dig ourselves out of. So from our point of view, we wouldn't penalize a site like that. We specifically talk about penalties in web search when it's really something manual from the web spam team, where they look at your website, compare it to the webmaster guidelines that we have, and say, well, this is clearly wrong, and our algorithms could be confused by this, therefore we have to take manual action to tackle this issue. But with regards to authors, at least as far as I know, we don't have any systems that research the background of the authors and then treat that website as lower quality just because they've written some bad things in the past. That's something that, personally, I would recommend looking at more from a general website point of view. Is this someone that you want to have associated with your website? Do you think the way that they write content matches the style that you want to provide on your website? Is it something that works well for your users? Do they come across as an authority in their field? Do they have a background? All of these things that you would just naturally do, and not specifically change in any way just because search algorithms are also involved. OK, so then my fears, they're nothing, right? I wouldn't worry about that, yeah. OK, thanks.

May I ask a question? Sure. Hello. I posted the question already on the Google Plus page. It's related to Azure servers and the Azure CDN. And the thing that happens, the problem that I'd like to confirm whether my fears are right or not, is this: physically, the website is located on Azure Websites. The CDN has a different address; it generates the content from the physical location, and the CDN also has its own address. And there is no redirect set. Is it serious? Can it bring some confusion for the bots when they crawl?
Or is it OK, you know about this issue, and you don't apply any penalties? Thank you. No, we wouldn't apply any penalties in a case like that. So it's not that you would necessarily see a big problem there. However, what can happen is, if we see these different versions and we discover them somehow, and we can crawl those versions separately, then maybe we'll index those versions independently. So maybe once with your domain name, and then once as a subdomain from your CDN. And that makes it so that you're kind of competing with yourself, in that we don't know which of these pages to show. Maybe sometimes we'll show this one, maybe sometimes we'll show the other one. And that's something that doesn't necessarily mean that your site will rank lower, but it means that we have to crawl more to see the same amount of content, like we have to crawl both of these versions. And it means that it's not clear which URL we will show in Search, because it could be this one, it could be the other one. So what you can do there, to tell us which one you really want to have indexed, is, on the one hand, the redirects that you mentioned, if that's possible to set up. The other is a rel=canonical tag on the pages, to tell us this is the canonical URL that I want Google to use. And then what you can also do is put the URLs that you want to have used in a sitemap file, to tell us that these are the ones that I really want Google to use. And finally, what you can also do is, with the internal linking, link to the pages that you want to have used. Internal linking is sometimes a bit tricky, because sometimes it's helpful to have relative URLs for the internal links, and with relative URLs, we kind of lose the host name. We use relative URLs in order to make it easier across these different addresses and not to confuse things; that's why we use them. And I wonder whether the solution is the same one you've described? I think that's fine. So if you have the rel=canonical set up there, if you have the URLs in a sitemap file, that already gives us a lot of information. The problem is that any settings that are made to the website in the admin panel are automatically applied to all these versions, because physically there is only one version, and since we use relative URLs, they are applied to all three websites. That's why I was asking. So you say it's not a big problem; it will not cause us to rank lower, it's just competition between the versions? Yeah, yeah. And it's something, personally, I would try to clean up, because that way you really control which URLs are shown in the index. And also, by cleaning it up, you can move to a different CDN at some point later on as well. If you have everything on the Azure CDN at the moment and you move to an Amazon CDN later on, then you don't have to worry about these Azure URLs redirecting to the Amazon version, because that's probably hard to set up. You can just focus on your domain, and your domain is logically hosted at Amazon. And we access your domain, we index your domain, and it doesn't matter where you actually host it. OK, thank you. OK. All right.
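A sketch of the rel=canonical approach for that CDN situation, with invented host names: the same tag is served on every copy of the page, always pointing at the main domain, so whichever address Googlebot crawls, it learns which URL should be indexed.

```html
<!-- Served identically on the main site, https://www.example.com/page,
     and on the CDN copy, e.g. https://example-cdn.azureedge.net/page -->
<link rel="canonical" href="https://www.example.com/page" />
```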
Here's one kind of similar. Our mobile website sits on a subdomain, so m-dot. Do we need to have a separate mobile sitemap listing all of the m-dot URLs and submit that in Search Console? No, you don't. You can just take the normal web URLs, the normal desktop URLs, and submit those in Search Console. And from there, we'll pick up the link to your mobile versions and use those only for the mobile search results. So if we see that your desktop pages have an associated mobile page, then we can swap that out and show that URL appropriately in the search results. I think, in the long run, it's a good idea to aim for a responsive design, where you have one URL that works for all of these different devices. That makes it a lot easier with regards to issues like this, because you don't have to worry about which URL is in the sitemap file, which URL has internal links or structured data associated with it. All of that is a lot easier if you have just one URL for the content, instead of multiple per-device URLs.
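For reference, the link between a desktop page and its mobile version in a separate-URLs setup is typically declared like this; the domain is invented for the sketch:

```html
<!-- On the desktop page, https://www.example.com/page -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.example.com/page" />

<!-- On the mobile page, https://m.example.com/page -->
<link rel="canonical" href="https://www.example.com/page" />
```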
How does the word "today" work in the title? So if someone types a query plus a date, and in my title I have "today", does that automatically match there? I'm not aware of us doing anything specifically different for people adding "today" to the title of a page. I also don't know if that would even make sense, because if we index a page that says "today", and we index that once and then recrawl it in two weeks, then that "today" in the title doesn't necessarily mean that it will have every day's content there. It might just be for the one day when we crawled and indexed it. So I don't think we'd do anything special there.

Let's see. After the August core update, websites from the US that are not delivering to the UK are showing in the first results for many keywords in our industry. Will this change anytime soon? I don't know, with regards to whether it will change anytime soon. It's really useful to have sample queries for any type of question like this. So if you can send us queries where you see that we're obviously showing things wrong (you can send them to me, you can send them to Gary, or just submit feedback in the search results), then that's something we can take to the team and discuss, to see what the best approach there might be. And from time to time, we get comments like this saying, you're showing search results that are from a different country and they're not relevant here. And it's something that is sometimes tricky to balance out, because sometimes content is internationally relevant, even though it's hosted or initially targeted for a different country. So that's something where, if you can give us generic queries, kind of common queries, where the search results that we provide are obviously wrong, then that goes a really long way for us to pass that on to the engineering teams and say, hey, look at this obviously wrong thing; we need to fix this. Whereas if the examples are, like, long quotes that nobody actually searches for, and you're saying, well, on page five there's this one UK site instead of the US site, then that's something where the engineering team will say, well, this will change next week anyway, because what happens further down there can always change. But the more obviously wrong these search results are, the more likely the engineering teams will be able to take action on them. But we need to have those clear examples.

How can I delete a wrong sitelink from Google search results? It shows sitelinks to deleted pages on the website. They return a 410, but still appear as sitelinks. So, in general, sitelinks are treated as normal search results, and we don't have any special tools anymore to kind of manage sitelinks for your website. What you can do, however, is remove pages from the search results like any other page in the search results. That can be done by serving the proper result code, it can be done by using a noindex tag, and it can be done with the URL removal tool, if it's something that you urgently need to have removed. In this case, you're saying that it's returning a 410 already. A 410 is a result code that tells us the page is gone, and it's still appearing in the sitelinks. So I see two general situations where this could happen. On the one hand, it might be that this page just recently changed to a 410 result code, and it's just a matter of us recrawling and reprocessing that page and saying, oh, this page is gone, we'll drop it from Search. What might also be happening is that there's something blocking us from actually seeing that a 410 result code is being returned here. And that could be because maybe something is redirecting to an error page there, or it could be because that page is blocked by robots.txt, so we can't actually crawl it. And for double-checking those two scenarios, you can use something like the mobile-friendly test or the Fetch as Google tool within Search Console, and double-check that Google can actually see the 410 result code for that URL. And if we can see that, then it should drop out on its own over time. And again, if you need it gone quickly, you can use the URL removal tool as well.
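As a quick sanity check outside of Search Console, the raw exchange for a properly gone page would look something like this (hypothetical URL). If the response is instead a redirect, or the path is disallowed in robots.txt, Googlebot never sees the 410:

```
GET /old-page HTTP/1.1
Host: www.example.com

HTTP/1.1 410 Gone
Content-Type: text/html; charset=utf-8
```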
If your site is mostly subpar content and you improve a couple of articles, will those articles still be able to improve their rankings, or will the whole site drag them down? I think there are two aspects there. On the one hand, we do try to rank pages on their own, as we find them. And on the other hand, there's also an aspect of us trying to understand the website as a whole, as this group of things that belongs together. In particular, when it comes to new pages being added, we try to figure out, what should we do here? Should we crawl these really quickly and get them indexed as quickly as possible? Or is this maybe not as critical, so that we don't actually need to index them that quickly? And from that point of view, I would look at both of those sides. On the one hand, work on the website overall to improve its quality overall. And on the other hand, also make sure that the individual pages are good. So that's something where, if you're aware of your website being mostly subpar, then that's obviously something you can start working on. And if you improve that incrementally, then over time we will try to take that into account as well. But just improving one or two pages and leaving the rest of the website in a state where you're saying this is obviously bad, I don't know if that's really a good long-term strategy.

What if a website has more pages that are not found than pages that are found? Does this have any impact on my ranking? We had a bug on our website, and suddenly we have a lot of crawl errors. As I understand this question, that's perfectly fine. It's completely normal for a website to have a lot of URLs that don't work. That's kind of the way that the web works. From our point of view, we will try these URLs that don't work; if we see that they don't work, we'll generally crawl them less frequently, but over time we'll still retry them every now and then. So you'll continue to see them as crawl errors in Search Console. But that doesn't necessarily mean that it's a bad thing, or that it negatively affects the rest of your content. We focus on the content that we do find on the website, and we use that for indexing and ranking.

We have a boxing statistics website with millions of boxer profiles and fight pages. But most of our pages are listing factual data, like numbers and figures. Most of our statistics are unique to us, but it seems that Google is seeing our content as low value or too similar to others, due to us mostly showing numbers and not much text. What's the best way to improve our rankings when what we offer is factual and number-based content? So first off, factual, number-based content is perfectly fine. That's something that has a place on the web, and that's something that makes a lot of sense if you have that data. On the other hand, it's sometimes hard for us to show information that we only have available as numbers instead of something textual. So we do use text as a basis to understand what a page is about, and that might make it a little bit tricky in your case. However, I assume most people are looking for the names of the boxers, or the names of these matches, the names of these events that are taking place. And that's content that you probably already have in textual form on your pages. So from that point of view, I wouldn't artificially add text to these pages if you already have the critical content on these pages in text form. But maybe double-check that it's actually in text form and not as an image, for example. And past that, I would look at what your users are actually looking for, and try to understand, with your users, what you can do to significantly improve the quality of the content of your website overall. And that could be by adding more text; that could be by doing other things. It could be that your website has a lot of value, but it's a very small audience that actually likes the type of content that you're providing there. And maybe it's normal that you don't have a ton of people coming to your website directly from Search. So these are all things that you could look at and consider when it comes to taking the next steps with your website.

Some time ago, I saw a website using the following HTML code to redirect their old domain to a new domain. Would this pass PageRank? Is this different from a 301? And the HTML that's shown is a meta-refresh-type redirect. We generally don't recommend using a meta-refresh-type redirect, because it can be a little bit tricky from an accessibility point of view: if you click the Back button, you land on that page again and it redirects you forward again, instead of taking you back to the original page. So that's something we don't recommend as a replacement for a 301 redirect. However, we do take it into account when it comes to understanding what content is on a page and which pages are redirected. And when we take that into account, we do forward the signals appropriately. So things like PageRank are forwarded here as well. One way you can double-check this, if these URLs are on websites that you have verified, is to use the Inspect URL tool in Search Console for the old and the new URL, and double-check that the new URL is actually the canonical URL for the old URL. And if we've chosen the new URL as canonical for your old URL, then essentially those signals are forwarded to the new URL, to the canonical URL.
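The kind of HTML that question is referring to looks something like this; the target domain is invented, and the 0 is the delay in seconds before the browser navigates to the new URL:

```html
<!-- Meta-refresh redirect on the old page. Signals can still be
     forwarded, but a server-side 301 is the recommended approach. -->
<meta http-equiv="refresh" content="0; url=https://new-domain.example/" />
```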
If a site gets hacked, how long does it typically take to recover rankings after the manual action is removed? That's really hard to say. It depends quite a bit on the type of hack that happened there. It's something where, if we can recognize that it's a clear hack, or that there are clear sections of content on a website that are hacked, then we can kind of block that out and work to focus on the previous version of the site. However, if that hack is all over the website and it's really hard to isolate, then we essentially have to reprocess the website to understand it again and see, what is this website really about? Is this about pharmaceuticals, or is this about, I don't know, jogging shoes, for example? And that's something that takes a bit of time to happen. And there is no fixed timeframe where we can say, after two weeks your website will be back to normal. It really depends on the hack itself.

OK, can you hear me? Yes. OK, I just made it in. In the first case, where you can clearly recognize that maybe they've added a bunch of content: the content's been removed, you set 410 headers, the manual action has been removed. What's a typical kind of scenario in that case, I guess? Should we expect it's going to take as long to recover as it took to bring it down, essentially? So if it's clearly a section of the website, and you've removed that completely and kind of taken care of it fairly quickly, then I would expect that to settle back down within a couple of days. OK, fairly quickly. But if it's been there a really long time, and especially if it's spread out across a lot of different pages (one example that we sometimes see is that hackers have hidden links on all pages of the website, and then suddenly you have all of these pharmaceutical terms all over your website), then that's something that just takes a long time for us to actually reprocess and understand again: what is this website really about, and how should we be showing it in Search? Right, waiting for a re-index, perhaps. OK, thank you.

All right, I think, time-wise, we're towards the end. Maybe if there's one more question from any of you live here, I can jump in and grab that as well. What else is on the map? Maybe me? Yes. All right, so we have a client here that is blocking most of the US, Canada, and European countries, basically the biggest countries in the world from which we think Googlebot could crawl our site. So the result is that our site is not getting verified, because whenever we go into Google Search Console and try to verify it, we get a server timeout. And also, another problem is that we have two separate sites listed for the same domain. So it's like a demo site and the live site. So for some keyword searches, we get the demo site, and for others, we get the live site. And we even added a robots.txt disallow to the demo one, and also noindex, nofollow. Why is this happening? I think that's a critical issue, because we probably can't crawl the site properly. So I would strongly recommend figuring out where Googlebot is trying to crawl from, and making sure that Googlebot can actually crawl from those locations. Especially if you're blocking the US, that might have a really strong impact there. All right, I need to head out. Thank you all for joining in. And I'll see you all next time. Bye, everyone. Bye, John. Bye.