All right. Welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst at Google in Zurich, usually. Now I'm at home, like I guess most of you. Part of what we do are these office-hours hangouts, where folks can join in and ask any questions around their website and web search, and we'll try to find some answers for you. A bunch of things were submitted already on YouTube, but if any of you want to get started with a question of your own, or something you submitted there, feel free to jump on in.

Hi, John. I have a few questions about one website. It's a gift basket website; they sell different types of gift baskets. Last year we faced an issue with the site: our rankings were good for the Christmas keywords for a few months, until October, but after October we noticed that our rankings dropped a little bit. Then we checked our competitors' websites. Some of our competitors add big blocks of content to their category pages, with information like their delivery policy and what types of Christmas gifts can be chosen — so it was essentially a blog post placed in the category description. We, on the other hand, put a short description on our category pages, like which new gift baskets we have and what the last date to order is. So how do we decide which kind of content is useful here?

It's really hard to understand you from the audio, but I think what you're asking is: you have category pages, there's a lot of extra content on those category pages, and you're wondering whether that's good or not. Is that roughly it? OK. In general, having some amount of text on category pages is always useful, because it helps us to understand those pages a little bit better. But the important part is that we can still recognize that this is a category page of an e-commerce site, and not a blog post or a kind of Wikipedia article on the type of products they sell. With that in mind, I would try to put useful text that makes sense for a category page on the category page. And if you want to write background information, or include the full copy of your terms of service and things like that, I would put that on a separate page. That makes two things a little bit easier for us. On the one hand, it's a lot easier for us to determine the intent of the page: is this an informational page, where we should send users who are looking for information? Or is it primarily a shopping page, where we should send users who are trying to buy a product? That's something where you need to make up your mind about what kind of page you want it to be, because it's not the case that you can do all of these things on one page and we'll think it's perfect for all kinds of needs. Rather, we'll look at the page and say, well, we don't know what the best use of this page is; there are so many different aspects here that we have to guess. So the clearer you can make a category page be what you want it to be, and address the needs of the users that you think they will have, or that you want to cover, the better we can follow that and say, oh, this is a really strong category page for people with shopping intent. We should send them to this page when they want to buy a product. Thank you, John. Sure.

Yeah, Robin? John, can I ask a quick question? Sure. So, I'm working with a B2B client who is in the business of sales intelligence tools. My question is: in such a tight B2B space, is it good to implement FAQ schema on their FAQ pages? Because even the big guns like Salesforce and Zoho aren't making use of that schema. Well, I think if other people aren't using it, that gives you more opportunity. So I wouldn't say it's a bad thing if the competitors aren't using the newer features yet. But I would think about what kind of content you want to provide in the FAQ sections: make sure they're really frequently asked questions, and that they're specific to the individual pages — not that you just copy all of the questions onto every page of your website. And I wouldn't assume that just because a big website is doing something, or not doing something, that they're doing it on purpose. Sometimes they just don't have time to fix or change things. Yeah. Well, thanks, John. Stay safe. Sure, you too.
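(For reference, the FAQ schema discussed here is schema.org FAQPage structured data. A minimal sketch of what that markup could look like, generated with Python; the question and answer text are hypothetical, and the point about keeping the questions page-specific is John's advice above:)

```python
import json

# Hypothetical FAQ content for one specific page (not copied site-wide,
# per the advice above).
faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Can I integrate the sales intelligence tool with my CRM?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes, the tool offers native CRM integrations via its API.",
            },
        },
    ],
}

# Emit the <script> block to embed in the page's HTML <head>.
print('<script type="application/ld+json">')
print(json.dumps(faq_jsonld, indent=2))
print("</script>")
```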
All right, go for it, Robin. Hi, John. First, I'd like to introduce myself. My name is Robin Lispanyard. I'm the founder and CEO of Selectus, a product review website in French, and proud employer of 12 full-time employees. So I'm going to read my question out loud, I guess. Hi John, we sent a reconsideration request for our website on the 16th of March. Should we be expecting a longer review period than the usual few days to two weeks, due to the current situation with coronavirus? And also, if I may ask, does Google sometimes send a warning before applying a manual action? Because in the case of our website, we instantly lost 90% of our traffic and revenue, putting our company and its employees at risk of bankruptcy. We realized we made a mistake, which we quickly corrected after receiving the penalty, but a simple warning with something like 15 or 30 days' notice would have done the job too. Can you have a look at it?

Hold on, hold on. Bye, Amito. Bye. Okay. So I took a quick look at the site just beforehand. Usually, with regard to reconsideration requests, it does take a lot longer than just a couple of days. Especially with link-related reconsideration requests, I've seen they sometimes take up to a month or so. So that's something where, at least at the moment, the team is quite overloaded with these kinds of requests. I can double-check with them to see if there's something they can do here — if this is a clear case where they can just say, oh, we don't need to spend a lot of time on this, this is actually okay now. With regard to sending a warning, that's something we do discuss from time to time, and I don't know if that's something that will change. The primary reason we tend not to send a warning is that when we run into this kind of situation, where the web spam team notices there's a problem, usually that's something that's already affecting the normal search results. So users are seeing bad results, and we know that users are seeing bad results. In that case, we want to take action as quickly as possible to resolve it. So for the most part, it's always tricky to say we would send a warning when we know the results are currently in a bad state.
One other thing, maybe, especially with regard to link-related reconsideration requests — I think that's the issue you had there. One of the tricky parts is that sometimes a site's ranking is partially due to some of the link schemes it was doing. So by fixing those link schemes, removing the links or disavowing them, you're also removing some of the artificial support that the website had. So it won't necessarily be the case that when the manual action is resolved, the site pops back up in the same place as before. Because you've removed that artificial aspect, you're in your natural place again, which might be a little bit lower than before. It depends a little bit on how the algorithms looked at that in the past. But I'll definitely take a look with the web spam team to see if there's something we can do there. It's a bit tricky, because I know they also have a lot of work to do, and if I send them everyone who contacts me directly, then suddenly I'm just the person who slows everything down, rather than speeding up the process.

Thank you. Could you maybe explain to the SEOs here the whole process, from start to finish, of a manual penalty? Because when we checked, we looked at all the information on Google's webmaster resources, and most of it dates from the days of Matt Cutts. It seems like there are no new updates. Has the process evolved in some way? The manual review process is essentially the same, because people have to take a look at it manually. The guidelines have been fairly stable for quite some time now as well. I think what will happen a little bit more over time is that we can automate more of this. We already do a lot of this with some of our link algorithms, where we algorithmically determine that these are unnatural links and we just ignore them. So it's not the case that someone has to manually look at all of the links and say, oh, these are bad, these are OK; rather, we do that automatically. And I think that's the direction we'll probably continue to go: less and less of people having to manually take action — where you have to fix something and send a letter back saying you fixed it, and then someone looks at it again — and more that we can automatically recognize problems and, ideally, ignore them. If we can ignore them, then on the one hand it makes it harder for you, because you don't know what is happening. But on the other hand, it also doesn't harm your site. So if you accidentally do these kinds of things, or if you did them in the past based on bad advice, maybe, then if we ignore them, you're not causing your site any harm — but you're also not profiting from those unnatural things.

And I'm curious: in our case, for example, does the manual reviewer check out the website, maybe look at the company behind the website? Sometimes. Yeah, sometimes. OK. I mean, because these are manual reviews, we can spend a little bit more time, but the web is big and we have limited time. So what usually happens is someone will look at the bigger picture of the website and try to determine whether there's really a strong pattern of unnatural issues here, which could be all kinds of things. With links, it's always a bit tricky.
That's something where the manual reviewer has to double-check a little bit more and see: is this really something the website probably did themselves, or is this something a competitor did, maybe — someone trying to harm them and frame them by building these links? And if we can recognize that it's not in their control, we'll try to find a way to just ignore those links. But they do spend a little bit more time looking at the website as well.

And do you have multiple teams in multiple languages, or just one big team in the US? We do have multiple people in multiple locations for multiple languages, just because sometimes websites are hard to understand in different languages, and just looking at the number of links makes it really hard to determine: is this unnatural, or is this just a language the reviewer doesn't speak, where all of the links look like this because that's a very common word? I don't know. So we do have multiple teams in different locations. On the one hand, that makes it a little bit fairer with regard to international websites, but it also means that for some languages and locations we might not have as many people as for others. So that sometimes slows things down with the reconsideration requests.

And do you have some kind of minimal reasonable time that you expect the website owner to take before sending their request? No, no. If we send a manual action and you fix it within half an hour, that's fine. So that was a myth — many people told me, no, no, take your time, take your time. No. I think, especially with links, it's more important that you really clean up the bigger picture, and sometimes that does take a bit more time. But there are other issues which you can just fix: if a plugin on your website causes the problem, you can just remove that plugin and suddenly everything is fixed.

And — if you want me to stop, please stop me, but I have some more questions — once you receive the request, are there different priority queues? No, I don't think so. OK. And how does the team judge that a website has done a sufficiently good job of fixing the problem? Because maybe sometimes the problem is fixed, like, 90%, but you judge it to be OK, good enough. Yeah, especially with links, that's really tricky, because you can always find something bad on the internet if you look far enough. So the team tries to see whether a significant amount of the issue was resolved, and if so, that's usually fine. It's a bit tricky, like I mentioned, with links, and different people might look at it in different ways. So what I have seen happen is that someone reviews a reconsideration request and says, oh, it's not enough; then the webmaster sends a message back saying, that's everything I could find; and then someone else takes a look and says, oh yeah, it's actually OK. That shouldn't be the usual case, but it's something that's possible whenever there's a manual review process.

And if the problem has not been fixed sufficiently, does the team deliberately wait longer before taking another look at the website? No, no. OK, so that's another myth. That's another myth — we're busting myths today. Yeah. The one thing that can happen is, if we see that a website goes back and forth, the web spam team will say, OK, you're just wasting our time.
If you fix the problem, do the reconsideration request, and then a couple of weeks later have the same problem again — after a bit of that back and forth, the web spam team will say, OK, we'll take a look in a couple of months, when you've decided what you want to do. Last question, then. After such a review, is the website put on a list, to be checked regularly to see that no further problems happen? No, no. All right. If it's fixed, it's fixed; there's nothing that hangs around. Well, thank you very much. I hope this is useful to other SEOs — we busted a few myths here. Thank you very much. No, thanks. I think it's always frustrating and confusing the first time you get a manual action like this, so I totally understand all these questions.

OK, let me run through some of the questions that were submitted, so that we can try to catch up with some of those too. Is there any image-ranking advantage in using the same image in different blog posts when talking about the same topic? For example, every time I post about Parmesan cheese, I use the same image instead of a different one. Could that help the image rank better? No, that doesn't change anything for the images, because images don't rank on their own; they always rank together with their landing page. So it's always a matter of this image on that landing page, and the landing page has more information, more context around that image. It's not something where the image alone would be ranking in the search results. Thank you, John, that was my question. Great, cool.

How can we use the mobile-friendly test for web pages that are under development? It's an extremely useful tool, but it doesn't work when there's a robots.txt disallow rule, and we're afraid of opening up the dev part of the site, because the real Googlebot could then crawl the whole thing. So, there's a guide in the developer documentation on how to set up a proxy for debugging locally hosted or firewalled pages. It's on a page called Debug Your Pages — I guess that's a good title — which is mostly about JavaScript sites. Essentially, you set up a local proxy that listens on a specific port, and then you assign a URL to that location. That way, you can test essentially locally hosted pages and try them out there. That should work for the mobile-friendly test; that should work for anything you use the URL Inspection tool for. One thing to keep in mind with the URL Inspection tool is that you would need to add this host to your Search Console account first. So there's a little bit more back and forth involved if you need to use the URL Inspection tool, but the mobile-friendly test, the rich results test, the structured data tests — all of those should work right away.
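(The general pattern: run the development build on a local port, then map a public URL to that port with a tunneling tool so the testing tools can fetch it. A minimal sketch, assuming a static build in a dist/ directory; the directory, port, and choice of tunneling tool are illustrative, not taken from the documentation John mentions:)

```python
# Serve the local development build on a port of your choice.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

handler = partial(SimpleHTTPRequestHandler, directory="dist")  # hypothetical build dir
server = HTTPServer(("127.0.0.1", 8080), handler)
print("Serving dev build on http://127.0.0.1:8080")
# A tunneling tool (e.g. `ngrok http 8080`) can then expose this port at a
# temporary public URL, which you paste into the mobile-friendly test.
server.serve_forever()
```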
One of our clients creates paginated pages with similar or synonymous names in the H1, title, and internal linking. What's your opinion on this? Can it help to avoid keyword cannibalization and help us rank for different keywords for the same category? Theoretically and practically it seems to be a great strategy, as I've seen that page six of one of our categories got good clicks — maybe because nobody else is using that specific combination, and we have it in the H1. So, in general, you can do whatever you want on your category pages and your paginated pages. However, because of the way paginated pages tend to be linked within a website — you link to the first page of the paginated set, then to the next page from there, and to the third page from there, in a chain of links — we see the first page of a paginated set as the most important one. And that's the one we tend to show in the search results when someone is looking for this category of products. So if you spend your time trying to improve page six of a category's pagination set, that's probably a lot of time spent on things that have very, very little visibility in search. My general recommendation would be: instead of trying to improve these far-off, less relevant pages within your website, focus on the more relevant pages and make those a lot stronger. Instead of going off and doing these super-long-tail things — where you might rank well for those keywords and get a little bit of traffic, but if nobody is searching for them, that's not very useful — focus a little bit more on making the core of your website much stronger. Because by making the core of your website stronger, everything around it will be a little bit stronger as well. That's the direction I would head. It's not that you can't do this; if you want to optimize page six of your category pages, go for it. My feeling is just that it's not the best use of your time.

We've been experiencing slow average response times. Can this demote our keyword rankings? Initially our pages crawled and crawl stats dropped, but now they've more than doubled. So, I think you need to take a look at where exactly you're seeing this slow average response time, because there are two places where speed plays a role from our point of view. On the one hand, there's the practical aspect: when we crawl a web page, we need to be able to get that content as quickly as possible, so that we can crawl as much as possible from your website. That's more of a practical thing for Googlebot. If your server is slow, we tend to slow down our crawling as well, and then we can't crawl as much. So if you have a lot of content and your server is slow, we probably won't get through everything. The other aspect is what a user would see when they go to the page. That's less about the practical fetching of individual files, and more about how long it takes to actually display the full content when you load the page in a browser. So those are the two aspects. The crawl speed — the time it takes us to fetch individual pages — is, like I said, purely a factor from a crawling point of view: if we can't crawl as much, we can't show it in search. The speed a user would see in a browser, the rendering speed, is something that can play a role in ranking, because when we send people off to a website that's really slow, we feel we've done them a bit of a disservice by sending them somewhere their time is being wasted. The crawl stats in Search Console show the Googlebot side. And the speed report in Search Console, PageSpeed Insights, and Lighthouse tests tend to show the more user-facing side, which is also what's reflected in ranking.
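(A rough way to see the first of those two aspects — raw server response versus full download time — is to time a fetch yourself. A minimal sketch; the URL is a placeholder, and note this still ignores in-browser rendering, which is what tools like Lighthouse measure:)

```python
import time

import requests

url = "https://example.com/some-page"  # placeholder

start = time.perf_counter()
resp = requests.get(url, stream=True)  # stream=True: don't read the body yet
ttfb = resp.elapsed.total_seconds()    # roughly: time until response headers arrive
_ = resp.content                       # now force the full body download
total = time.perf_counter() - start

print(f"server response: {ttfb:.2f}s, full download: {total:.2f}s")
```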
Let's see. Do Googlebot desktop and smartphone always have the text "Google" in them? Because many times, our Search Console crawl stats don't match up with our Kibana server log trends. In our server logs, when I filter for Google, I see Mediapartners-Google and Google Web Light, and some other user agents that look like Googlebot but don't contain "Google" in them; they contain things like Chrome with a version number, or something like that. So, we use the Googlebot infrastructure for a number of different crawlers, and that's what you're seeing in the crawl stats report. The primary non-Search crawlers you would see there are the ones you mentioned in the question: the Mediapartners bot, which is, I believe, from AdSense; then there's also the AdWords landing page check, I believe; and then there's the Merchant Center one — the product search shopping landing page test, or whatever that's called. I don't know exactly what those user agents are called; we have them documented in the Help Center, so you can double-check there. So it's not only Googlebot for Search, but also these other requests as well. There's also, I believe, the rendering side of things that might show up there: when we render a page for search, to determine if it has any JavaScript that we need to execute, we also need to fetch all of the individual resources of those pages, which would also go through the crawl stats report. One of the tricky things with the crawl stats report, and the speed there in general, is that, obviously, when we start crawling a lot more from your website, your web server might get a little bit slower. So we try to balance that automatically: if we recognize the server is getting slower, we'll try to reduce crawling a little bit. The other thing that plays into the speed and size numbers is that if, for example, your website has a lot of really large content — perhaps a ton of really large PDFs — and we go off and try to crawl those, then it might look like the average page is much slower, or like we're crawling a lot of data from your server, but actually it's just that you have a lot of really large files, so we ended up crawling those.
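(A minimal sketch of how you might reconcile server logs with the crawl stats report: count requests by Google crawler user-agent token, and verify a claimed Googlebot IP with the reverse-then-forward DNS lookup that Google documents. The log path and combined-log format are assumptions, and the token list is an illustrative subset — check Google's crawler documentation for the full set:)

```python
import socket
from collections import Counter

# Illustrative subset of documented Google crawler user-agent tokens.
GOOGLE_TOKENS = ["Googlebot", "Mediapartners-Google", "AdsBot-Google", "APIs-Google"]

counts = Counter()
with open("access.log") as log:          # hypothetical combined access log
    for line in log:
        ua = line.rsplit('"', 2)[-2]     # user agent is the last quoted field
        for token in GOOGLE_TOKENS:
            if token in ua:
                counts[token] += 1
print(counts)

def is_real_google_crawler(ip: str) -> bool:
    """Reverse DNS, then forward-confirm, per Google's verification docs."""
    try:
        host = socket.gethostbyaddr(ip)[0]
    except OSError:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    forward_ips = {info[4][0] for info in socket.getaddrinfo(host, None)}
    return ip in forward_ips
```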
My blog is just three months old. My posts keep appearing on the first page, and then they suddenly disappear after one or two days. Why does that happen? I think that's always a little bit tricky in the beginning, when we first see websites and they're fairly new, because we don't have a lot of signals for those websites, and we aren't really sure how we should show their content, or where those websites are relevant. We don't have a lot of information from the rest of the web telling us this is actually a really good website and we should show it in search. With that, our algorithms have to make some assumptions in the beginning. They have to look at your pages and say, well, I don't really know a lot about this website, but it looks OK; maybe I'll try showing it like this in the search results. And over time that settles down, when we have a lot more signals from the rest of the web and can be a little bit more deliberate in how we show things in search. So what you're seeing there is fairly common, especially for new websites: maybe they're shown very visibly in the beginning, and then it drops off to a stable state. Sometimes it's also the case that the competition is really strong, and we tend not to show these new pages as visibly as we otherwise might. And over time we'll see, actually, this is a really great site; we should show it a little bit more visibly, even in the competitive search results. So my gut feeling would be: for the first six to nine months of a website, things are going to fluctuate, and it's up to you to make sure that the picture you're providing is really a good one, where it's clear to us that this is actually a really strong website and we should be showing it very visibly in search. So: improve the quality of your website, and also perhaps find ways to promote it independently of search, so that users can still find it — and if they really like it, they can link to it and recommend it to their friends. That way you're building up traffic from both sides, not just throwing a website out there and hoping that Googlebot does some magic and sends you a lot of traffic.

My website has ranked for an "SEO company plus location" keyword phrase in the first position at that location for the last two years, and then suddenly another website ranks for the same key phrase. And that website wasn't even a competitor before. We analyzed the important points of that website and concluded that the domain name is an exact match for the key phrase, and its backlinks are almost the same as ours; its content marketing is also about 70% the same as ours. So my question is, how can we beat that website, which has now been ranking in the first position for a couple of days? We've been in this industry for the last 10 years. I don't know, it's really hard to say. It might be that they're doing some things really well; it might be that you're not doing things as well. It's really hard to say. But in general, some websites go up and some websites go down. So maybe this website was in one of the previous hangouts and learned a magic trick that helped make it so that Googlebot can find their website better. There's no simple solution that says, I will always be ranking first. Especially when you say you've been in the industry for the last 10 years — that historical aspect of "our website was always ranking number one" is something I would just assume is not going to hold. Things will always change, so you need to always stay on top of things, and the way the web evolves in your niche will require some changes on your website. That's something you should think about doing — and always keep testing, making sure that you're doing things that work well for users. As for why a competitor might be able to do that, that's always tricky. I would not assume it's only because the domain name has some keywords in it. We do use keywords in domain names and in URLs, but it's a really tiny factor, and it's really something that mostly matters when we have very little other information about a website. So I wouldn't assume it's impossible for you to beat them again and get that number one ranking back. But it does require work, and it's not something where you can find that magical meta tag to put on your pages so that suddenly everything gets better.
So yeah, if you're an SEO company trying to rank for these "SEO company" phrases, then you probably know all of the things you could be focusing on and working on. And if you're unsure of what you could be doing with your website, or with this specific situation, I'd definitely recommend dropping by the Webmaster Help Forum, laying out the problem you're looking at and the URLs involved, and asking for honest feedback from people. If you're an SEO company, there will be some feedback in the forum saying, oh, you're an SEO company, you should know what you should be doing. But it's still important to get a lot of that feedback, and to get a diverse set of opinions, so that you can filter out which of these aspects are things you think really matter, or things you'd really like to try out. For some you might say: I don't want to do that, or that's not something I'm good at, or I don't want to hire someone to do that at the moment. That's totally up to you. But having that set of ideas is an important first step.

Are there any pros or cons to having multiple sitemap files without completely filling the 50,000 URLs? We're planning to implement lastmod tags, but currently we have around 200 files in our sitemap index; some sitemaps contain fewer than 100 URLs, and none contain more than 20,000. Would Google be able to see the last-modified URLs in all sitemap files? Because I'm seeing that Google doesn't read all sitemap files every day. So yes, we don't crawl all sitemap files every day — that's correct. But you can have multiple sitemap files; that's perfectly fine. What we recommend doing when you update a sitemap file is to ping that sitemap file to us. There's a mechanism — I believe it's documented in the Help Center — where you essentially send Google a request saying, hey, I updated my sitemap file, take a look. And then we'll go off and double-check that sitemap file. That's what I'd recommend doing there. Having fewer sitemap files makes this a little bit easier, so if you can fold things together, that's also fine. And if you can fold things together in a way that puts the fresher content in a single sitemap file, and the older content that hasn't changed for a while in other sitemap files, that also makes it a little bit easier, because we can focus on the sitemap file with the newer URLs, for example. But those are small optimizations, I'd say, compared to just generally using the last modification date, because the last modification date does help us quite a bit.
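(A minimal sketch of both pieces — a "fresh content" sitemap with lastmod dates, and the sitemap ping request as it was documented at the time of this hangout; all URLs are placeholders:)

```python
import urllib.parse
import urllib.request

# A small "fresh content" sitemap with lastmod dates (placeholder URLs).
sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/new-article</loc>
    <lastmod>2020-03-27</lastmod>
  </url>
</urlset>"""

with open("sitemap-fresh.xml", "w") as f:
    f.write(sitemap)

# After uploading the file, ping Google about the update (the endpoint
# documented in the Help Center at the time).
sitemap_url = "https://example.com/sitemap-fresh.xml"
ping = "https://www.google.com/ping?sitemap=" + urllib.parse.quote(sitemap_url, safe="")
with urllib.request.urlopen(ping) as resp:
    print(resp.status)
```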
Hi, can I ask a question? Sure, go for it. I'm working on a site that has been impacted by an organic traffic drop. We've been analyzing the whole website for the last four months. We saw there's no technical issue, there's no content issue, the site loads pretty fast. I think it's a fairly good site; everything checks out. But we're thinking about this theory of a bad neighborhood on the IP address. So is there a chance that this site, being hosted on Cloudflare and sharing its IP with too many really, really sketchy websites — is that a reason for it to be "penalized," quote unquote? No, usually not. It's really common that websites share the same IP. Sorry — the thing is, we're trying to think outside the box here, because we're even asking our SEO colleagues, hey, what's going on here? We can't figure this out. There's no actual correlation with Google updates; when we check the updates, it's like we're banging our heads against the wall. Yeah. So, usually — there are extreme cases where the web spam team takes a look at a website and sees that on the server there are 9,000 other sites and they're all spammy, and then they might say, OK, all of these websites are spammy. And if there are one or two reasonable websites in there, it could happen that those are affected by this. With everything else — normal hosting, a shared hosting provider, a CDN — there's such a broad mix of websites there that even when someone manually looks at it and sees, oh, there are 1,000 spammy websites here, it's very obvious that there are also 2,000 good websites there. So just because there's a significant set of spammy sites on the same IP address, that wouldn't result in the whole IP address being demoted. It's really only the extreme case — maybe a spammer running a server with tens of thousands of websites on it, also renting it out to other people to make it look a little more legitimate — where it could happen that we miss something. And if that were the case, would we see a notification in Google Search Console? You would, yeah. Okay, great. Yeah, that's something we would do as a manual action, and with a manual action, that would result in a notification in Search Console. And then you can go through reconsideration and say, hey, my website is not pure spam — what are you thinking, look at my website, it's normal. But yeah. Awesome, thank you. I have another two questions, but I want to give anyone else a chance to ask one first. OK. Let me run through the submitted questions, then — there's a whole bunch. But yeah, let's see, how much time do we have? We have 15 minutes. Well, we can shift to open questions and come back to the submitted ones if we run out. Go for it. Anyone want to ask a question?

Yeah. Hey, John, I have a question. Sure. So, I work on a pretty massive user-generated content site, and we're mostly focused on the US. But we've seen great growth in our international usage and international content. This includes content from other regions that may be in English, but also in their native languages. And we've noted that although this content has been driving a significant amount of traffic, it doesn't seem to perform as well on standard engagement metrics: bounce rates, time on site, conversion rates. It doesn't seem like this content is resonating with users as well as the US content, and we're fearful that the international content we're getting may be dragging down our ability to perform on the US content. So we're wondering if it would be a valid approach to split the site onto a separate domain, or perhaps a subdomain, so that we could do a better job of protecting our domain-level quality. I mean, you can definitely split things up into separate domains or subdomains. But I think, overall, assuming this international content is of reasonable quality, you would probably be better off just keeping everything together.
If you're really not sure of the quality of the international content, that's something I would try to figure out first, rather than just looking at pure metrics like bounce rate, interactions on the website, and time on site. Maybe find some local speakers to double-check the quality of that content, especially if it's user-generated content, just to make sure it matches what you would like to see on your website — what you'd like to have published under your website's name. But in general, while you can split things off and make separate sites, I think that makes it overall almost a little bit harder, because suddenly you have two websites that you have to work on, promote, and make sure are performing well in search, rather than one really big website with this mix of different kinds of content. And do you think that the content in the native languages — let's say it's all within the same topic — do you think the language difference could be diluting the relevance for the US or English audience? I don't think so. I mean, what I would double-check is that this content ranks in those languages, and not in English. But in general, that should be fine. OK, thanks. I appreciate it. Sure.
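(One established way to signal which language version should rank for which audience — not something John prescribes here, but the standard mechanism Google documents — is hreflang annotations linking the language variants of a page. A minimal sketch generating the link tags; the URLs are placeholders:)

```python
# Language variants of one piece of content (placeholder URLs).
variants = {
    "en": "https://example.com/guide/",
    "es": "https://example.com/es/guia/",
    "x-default": "https://example.com/guide/",  # fallback for unmatched languages
}

# Each variant's <head> should list all variants, including itself.
for lang, url in variants.items():
    print(f'<link rel="alternate" hreflang="{lang}" href="{url}" />')
```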
So, I'm a little insistent — thank you — continuing with this idea of a bad neighborhood: we have a theory about bad-neighborhood traffic. Imagine you have a news website, but you're getting a lot of traffic from a video that ranks for a porn search. It's completely unrelated, a completely different topic. So is there any chance that traffic coming from a completely unrelated search term is hurting the main site? Or, the other way around: if we noindex that page and remove that traffic, would that benefit us? Does the question make sense? Yeah. In general, that wouldn't change anything. The tricky aspect there is the adult versus non-adult angle. When you say it's ranking for porn terms, that's something where I would just make sure that, overall, your website is clearly not an adult website. If you have some pieces of content that are more adult-oriented, that's fine. But make sure that you stay clearly on the side where the primary part of your website is clearly not adult. Because what could happen otherwise is that the SafeSearch algorithms say, oh, we think maybe this website is an adult website, and then SafeSearch will filter the whole website. With a normal news website, that's something you generally want to avoid. But if these are individual pages, that's usually fine. The biggest effect I've seen is that it makes it really hard to look at your metrics overall. We have this, for example, with our help forums. Every now and then I take a look at the analytics for our help forums, and the overall traffic goes up and down, and we haven't changed anything with the forums. They're not hosted in a great way from an SEO point of view, but we don't change a lot of things. When you look into the details, though, it's clear that sometimes we're just ranking for irrelevant things, and we get a lot of traffic for those irrelevant things. For example, with the forums, sometimes people will use "xxx" as a placeholder for their domain name, and suddenly the help forum ranks for "xxx" — where obviously people are looking for something else. They're not looking for the Webmaster Help Forum. I mean, I'm guessing. But if you look at the overall traffic, it's like, oh, so many people are searching for our forums and coming to them; and when that goes away, it's like, oh, our website disappeared from search. It's not that the primary content disappeared; it's just that this big chunk of irrelevant traffic disappeared. So is it OK to think there's no benefit or penalty to having that traffic? I'd say there's no advantage or disadvantage — it depends on what you want to achieve. Some people, I've seen, take this kind of situation and say, OK, I'm getting a lot of traffic for something irrelevant; maybe I can put an affiliate link on there, and at least this irrelevant traffic clicks on my affiliate link, and that's fine. Other people say, I don't want this extra traffic because it ruins my analytics, and they noindex the page. So that's totally up to you. Awesome. I have another question, but I want to let anyone else ask theirs first.

I have a quick question regarding robots.txt. Isn't it correct that when the robots.txt returns a 500 server error, that will prevent crawling of the website? Yes, at least temporarily. If we see it for a longer period of time, we'll assume it's more like a 404, and then we'll think, OK, we can crawl. And could you be a bit more specific — what's a longer period of time? How much time are we talking about, usually? I don't know what's written in the spec, but I believe it's a couple of days, something on that order. That should be in the robots.txt documentation we have on the developer site. I think that was specified with the change to making it more of a standard. OK, great. Thanks. Sure.
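(A minimal sketch of that logic, as a crawler following Google's published robots.txt handling might implement it; the exact back-off threshold is the part John is hedging on above, so the number here is illustrative, not from the spec:)

```python
def robots_fetch_policy(status_code: int, days_erroring: int = 0) -> str:
    """Rough interpretation of a robots.txt fetch result, per the discussion above."""
    if 200 <= status_code < 300:
        return "parse the rules and crawl accordingly"
    if 400 <= status_code < 500:
        # Treated like no robots.txt at all: nothing is disallowed.
        return "crawl everything"
    if status_code >= 500:
        # Server error: stop crawling the whole site for now. If the error
        # persists (per John: on the order of a couple of days), fall back
        # to treating the file as missing.
        if days_erroring < 2:  # illustrative threshold, not from the spec
            return "crawl nothing (temporary)"
        return "treat as missing; crawl everything"
    return "crawl nothing (unexpected status)"

print(robots_fetch_policy(500))                   # crawl nothing (temporary)
print(robots_fetch_policy(500, days_erroring=5))  # treat as missing; crawl everything
print(robots_fetch_policy(404))                   # crawl everything
```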
So, me again. I'm consulting on a really, really big website with around ten million indexed pages. We found out that nearly half of those pages are irrelevant, low-quality, or near-duplicate content. This is a really big website, again; it has a massive amount of traffic and has been around for around 20 years. So we're thinking about progressively noindexing and getting rid of those shitty pages, let's say. Imagine a case where we noindex 20% of those pages, considering that they don't bring any valuable traffic, so we don't lose any money or any significant traffic. Do you think we — let me rephrase that: how much time do you think we should wait before expecting any change, if we should see any change at all? So, I think that's generally a good idea, especially with a really large website where you can't double-check and improve the quality of all of those pages. There are two aspects that play into this. On the one hand, with regard to crawling, we'll be able to focus a lot more on the actual indexable pages, which means we can crawl them a lot faster, a lot more frequently, and get the new content a lot faster. That's always a positive thing, especially with large and changing websites. The other aspect is the overall quality of the website, and that's something that probably takes a little bit longer. My guess is that if these are really low-quality pages, they're already not being crawled that frequently. So I'm just hand-waving a number — maybe three to six, nine months until you see changes in the crawling. And my guess is that a little bit later still, you would see changes with regard to the quality of the website. So it's really more of a long-term thing, especially with a really large website, because it takes a while for us to even notice that there's been a significant change. Yeah, we thought the same, but it's always safe to ask. Yeah. I mean, the crawling aspect is something you can monitor fairly easily with the log files — depending on how easily accessible the log files are. That's something where we occasionally see, at SEO conferences, that people will do a case study and show: I worked on this really large website, I removed 30% of the pages, and these are the results I saw. I've never seen a case where removing a lot of cruft pages had a negative effect. I think there's always going to be a positive effect, especially for websites where things continue to change. If the content is out there and never changes anymore, then crawling doesn't matter. But if you're continuously putting new pages up and you clean out the old, unused things, it makes things a lot easier. Understood. Thank you so much, John. Sure.

Hey, John, I have one quick follow-up to my previous question. Let's say cost is of no concern to us; we have the infrastructure and the resources to build out these country-specific domains. Given that cost is not a factor, would we expect any inherent benefit, maybe, to having such localized websites for international content that is in English but somewhat nuanced to that geography, and also in that geography's native language? I don't know, it's hard to say. My general feeling is that unless you're providing something specific for users in those locations — so not just in that language, but really for users in that location — you'd probably be better off keeping it all on one domain. Doing things with country-code top-level domains makes sense if you're really providing content that's specific to that location. For example, maybe you have a chain of pizzerias: people searching for pizzerias can recognize that this is from their country, so it's more relevant to them. On the other hand, if your website has pizza recipes on it, then the location of the website is kind of irrelevant, and the geotargeting aspect wouldn't really be that critical. And I think with a lot of user-generated content, it's not the case that it's geographically relevant; it's more that it's in the user's language and they can read it, which makes it helpful. But they're already going to be searching in that language, so those pages are already going to be relevant. We don't have to add the extra factor of it being nearby. So in the case of, let's say, education — if something is related to courses people may be taking, maybe tied to a specific university, but there still isn't quite the need for any physical association with that school; it's just, oh, this is content from that school or from this course — do you think that's enough geographic nuance to sway it a little more towards the pizza-restaurant example, as opposed to the pizza-recipe one?
Yeah, I think that could make sense. If it's really tied to a physical location — or even if it's not tied to a location, if it's tied to, say, a certificate that's relevant in that country — that's something where the geotargeting angle would make sense. Okay, all right, thank you. Sure.

John, a completely unrelated question: is there any chance, or do you foresee, Webmaster hangouts in Spanish in the future? Webmaster hangouts in Spanish? Yeah. I don't know. I don't think we have a lot of people here who speak Spanish and would be able to do these. I don't know — oh, and if you volunteer someone, they have to do it. Whoops, okay. Maybe you can twist Gary's arm; he speaks a little bit of Spanish. He also does more off-the-record hangouts at the moment, so you could join in on one of those. I think he mentions them on Twitter every now and then. Awesome, thank you. Sure.

Can I add a question? Sure. We've been debating internally a particular use of the XML sitemap. We have a lot of content: a particular set of content that we think is our best content, and then our medium content and our lower content, whatever. And there's been an internal debate about whether it would actually benefit that best content if we were only submitting the best content in the sitemap, instead of submitting everything. Is there any validity to that? I don't think that would change anything. The sitemap file is really only used for the crawling side, which is the more technical aspect of getting content into Google. It's not a quality factor, so it's not like an internal link, where we would say, oh, you want to promote this content. It's really just purely from a technical point of view: this page changed, I'll take a look at it. It's not a sign that you're saying, well, this is a good page, you should show it more visibly in search. Sounds great. Thank you. Sure.

All right, maybe we can take a break here to pause the recording. I can stay on a little bit longer if any of you want to hang around, but it's always good to close out the recording at some point. Thank you all for joining in, and thanks for all of the many questions that came in. I wish you all a great weekend. Stay safe, wash your hands, do the usual things, and take part in these virtual hangouts rather than joining a big group of people — it's always a good idea. All right, see you all next time. Bye. Thanks, John.