All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do are these Office Hours Hangouts, where webmasters, publishers, SEOs can jump in and ask their questions all around web search and their websites. Looks like a bunch of questions were submitted already. But if any of you who are here live want to jump in with the first question, feel free.

Hi, John, can you hear me? Hi, Nick. Hi, if you wouldn't mind, it's 1 in the morning where I am, so I'm getting ready for bed very soon. Oh, my gosh. OK. Anyways, I kind of submitted a question a couple of sessions ago, and it was pretty broad, so it wasn't possible to go too in depth, which I understand. So I'll try to frame it like this, I guess. The website that I run has been around for quite a long time, over 10 years. And it was getting what I would call pretty consistent performance in search, until about the past year, where we were seeing precipitous drops. And now it's basically at 5% of the traffic it had a year ago. That being said, I do think it's algorithmic, because nothing is showing up under manual actions or security issues. One of the telltale signs that I briefly touched on is the fact that if I were to search the actual brand name, which is a very specific sequence of words, I could say with 99% certainty that if somebody types in the brand, they're looking for my site. So the site is actually showing up on page 2 of the results when I search for the brand name. It's being outranked by the LinkedIn page or a Glassdoor page. It's being outranked by other sites referring to the main website. But the main website itself is nowhere to be found on the first page. So I mean, that being said, to me that means for some reason the website ended up on a bad list or something. I don't know how else to describe it. But that's what it feels like to me. So if it would be OK with you, I would love to share just a bit more details in the chat window for you to go back to later. I guess for someone in my situation, like I said, with a very broad description like this, I mean, we're definitely trying to clean up some of the content. There's a lot of pages. It's a community site, so it's lots of user-generated stuff. But have you seen other websites go through something like this in the last year?

I mean, changes can always happen. So seeing other websites lose traffic is something that we do see over time. And I guess it kind of depends on the website itself, the kind of content it has, with regards to what directions it might take from there. So there are certainly situations where I'd say maybe the website model is not something that is as sustainable anymore as it was in the past, which is something that comes into play, for example, if you have a website directory where you're just collecting links to other companies and you have the phone numbers and addresses, but essentially, you don't have that much more information. It feels like that kind of model is something where that information is available everywhere now, and pretty much every website, every company has their own website. So there is no need for these kind of, or less need, I guess, for these kind of directory pages. So that's one aspect where I'd say maybe the overall model is something that worked really well in the past because it was really needed in the past, but maybe things have changed over time.
And then, of course, there are other kinds of websites where I'd say maybe our algorithms in the last year or so have been focusing more on, and where they're a bit more critical. For example, one area that I've seen mentioned a lot is the whole medical space, where I see people posting online, talking about authority, expertise, those kinds of things, where I'd say that definitely makes sense, where in the past it was really hard for us to judge the quality of a medical or medically-oriented site. And over time, our algorithms have gotten better in that regard. That's an area where I'd say maybe if you had a low-quality affiliate site that was focusing on these medical topics, then maybe you would be seeing changes there, even though perhaps over the last 10 years or so, you had a really good run. So that's another area where I'd say maybe from an algorithmic point of view, you might see bigger changes. But I really don't know about your specific case. So if you want to drop some more information into the chat, I can take a look at that and see if there's something more specific I can tell you about that.

That would be great. Thank you. The medical website, it turns out. So no coincidence there. But yeah, I will share more details in the comments pertaining to our situation. OK, great. Appreciate it. Thank you. Thanks.

All right. Anyone else that wants to jump in with the first question? Yeah, I can jump in. OK. Yeah, I'm interested about buyer personas. For example, in Google Ads or Facebook ads or YouTube, when I set up paid advertisement, I can play with age, occupation, behavior. And for example, in YouTube, I was surprised when we provide the Russian videos about SEO, about marketing, about head SEO. And I was surprised, because I thought that my audience is older than 25 years, and I thought it's the same male or female. But it's not like this. And our audience is less than 25 years, I think, like from 18 to 25. And yeah, of course, I limited access to these ads. But what about Google? For example, if we have some keywords and we need to combine them with user intentions, of course, I know about geolocation. Google considers this. But what about other things, I don't know, like behavior, age, and these parameters? Does Google consider, I don't know, using different instruments, like male, I don't know, other? So I guess the question is, if we use demographic information for search results ranking, kind of. Yeah, I know that you use geolocation, local search. But what about the user behavior, their age, their interests, hobbies? Does Google combine this information with, I don't know, personalization data?

I don't know. I don't think so. I'm pretty sure we don't have anything explicit in our ranking systems where we would say this person is a younger person, therefore we should send them to younger-people-friendly websites. I don't think we would have anything specific there. It might be something that you would see from a personalization point of view on a per-user basis. But I don't think we would have anything on a demographic level when it comes to search ranking factors. It's an interesting question, though. Yeah, never thought about that. But I don't think we have anything specific in that area. Yeah, because in paid ads, you can play all the time, choose which audience you want to show your ads to.
But in Google, it means, and I can explain why I asked this question, that we write blog posts, and we want to write information for our persona, to have our readers and to write to one persona, not to a crowd. Yeah. And I mean, it is interesting how I can combine my blog posts with this persona and users in terms of Google. Yeah.

I think that's something that you can definitely do. Not specifically that we would use it as a ranking factor, but if you can target your users better and write for your user as well, then I think that's something where you'll have success regardless of how we use that as a ranking factor. I think it's a good approach to take those different things, especially if you have a YouTube channel that kind of aligns with your blog; then maybe you get some more information on the demographics of the users there. I suspect you also get some of that in Google Analytics or in other analytics packages, but I don't know the details there.

But I see that all the data are different, and all the audiences are different. On YouTube, on Facebook, on Google, I don't see one audience; they're in different moods, yeah. And I mean, this is interesting about SEO. Of course, I can check this data in Google Analytics, but I want to provide value to my readers. If Google doesn't consider it, I got it. OK, thank you. Yeah. Cool.

All right, let's see what kind of questions we got. Patrick asks where his favicon is. I think that's one question that's been running around for a while. So we recently started showing the favicon in the search results, in the mobile search results, and it looks like people are trying different things out. I think, in general, that's OK. We have some guidelines with regards to what to watch out for, and we generally work to update the favicons automatically. So if you decide you want to change your logo, or the favicon, then that's totally up to you.

I have a client website, which is in the plastic surgery industry, and the homepage contains a gallery of before and after photos to help sell potential results that they can achieve. I found the homepage is almost non-existent in the index when the gallery is on the page, and after removing it, the homepage ranks again. Can you elaborate on the level of nudity or skin showing that the systems allow? Or what can we do to avoid this? I don't know about your specific site. That's my question, if you have any specific answers.

Cool. So I don't know exactly how it is with your specific site, but I've seen this kind of theme come up before with regards to, especially, plastic surgery websites, because that seems to be where you tend to have that more. And in talking with the safe search team, which kind of tries to figure out which pages should be shown with the safe search filter on or with the safe search filter off, they feel that this is kind of working as intended. So when we see images where we're kind of afraid with regards to maybe the nudity or the amount of, I don't know, skin or, I don't know how you would frame it with regards to plastic surgery sites, but essentially with regards to the images on the site or on the page, then that's something where our filters might kick in and say, we're unsure here. We're going to stay on the safe side. So that's probably what you're seeing there. And especially if you're saying that when you include the gallery or when you remove the gallery, you see these changes, then that kind of points in that direction. Cool. Also, that's what I thought.
Thanks for that. Yeah. So with regards to the direction you can go from there, I think that's kind of up to you. If you feel that this gallery of images is really critical for your home page, then obviously maybe it's worth taking that into account and saying, well, I want these images on the home page, and maybe my ranking with safe search turned on might not be as good. Or you could say maybe there is a middle ground, where you have a button, kind of like "show gallery," and it takes you to the gallery directly from the home page, so that it's not exactly the home page that's affected. Yeah, that was my proposed solution. Sorry, thank you for confirming that. Cool.

All right. Question about privacy policies, terms of service, contact us and about us pages. If you set up a website and such pages are not published, and a search quality rater rates your site negatively due to this, will the raters later re-judge your site when they're added at a later date? Is there anything the webmaster needs to do once these pages are added?

So I think you're probably mixing up two things here. On the one hand, the search quality raters are people who do look at the search results and the pages that we show in the search results. And they do follow the search quality evaluator guidelines that I think we published, or that we've been publishing, for a while now. And that does include things like watching out for terms of service, privacy policies, those kinds of things. However, those raters are there primarily to judge different algorithms that we want to test. So what usually happens is that one of the teams comes up with an idea and says, I think ranking would be better if we took this factor into account, or if we ranked things slightly differently based on this new algorithm that they're working on. And then, to double check that their suspicions are actually good, we generate a set of search results pages, essentially with an A version without that change and a B version with that change. And we send all of these to the search quality raters. And then the raters look at the search results. They look at the websites that are linked from there. And then they say, well, in this case, the version with the algorithmic change is better. Or in the other case, maybe the version without, or the version before, was better. And based on that, we try to refine our algorithm.

So it's not that the raters would look at your site and say, this is a bad site, it should rank lower in the search results. It's more that the raters would look at the general algorithm and say, well, in this case, this version is better or this version is better. So what tends to happen there is that our algorithms kind of take the feedback that we get from raters with regards to rating things into account. So that's something where, potentially, indirectly, our algorithms could be watching out for things like privacy policies or terms of service or about us pages. And that's something where, if we were to pick that up, I don't know if we would have an explicit algorithm just for that. I don't think that would make so much sense. But it's possible that there is some effect from that taken into account in our algorithms. So if we were to look at that explicitly and look for those kinds of pages, what would happen there is, when we've recrawled the website, recrawled the pages, and we see that these pages now exist and they're linked on the website, then our algorithms would automatically take that into account.
So there's nothing special that you would need to do if you make changes on your website in a direction where you think that this actually improves your website overall. Our algorithms would try to recognize the new state of the website as we find it and take that into account when it comes to ranking things. So you definitely don't need to wait for some rater to come and visit your website again. This is something that our algorithms tend to do automatically over time.

John, can I ask a question on search quality as well? Sure. So this is something that some of the folks from the Romanian SEO community brought to me, and it's regarding certain queries where, after the March update, the results seem, how can I put it? I'll just give you an example, so it might be easier to look at. So there's one very large retailer here that has a lot of pages indexed, including search pages. And Google seems to have indexed these kinds of e-commerce search pages that contain all kinds of keywords. And they seem to show up, and even though it's basically the same kind of pages, the content is almost the same, because that's, for example, for bedsheets, a certain type of bedsheets. So you'll notice that the first three results are basically the same thing, just the keyword is singular, plural, things like that. So it's kind of the same thing, and it's search pages. So I'm not sure if that's the best user experience, so to speak. And this happens for a lot of, so I got a lot of examples like this. This is one more, a bit more extreme. It's like the first eight results are from the same website. That's kind of dominating now, positions one through eight, the same pages. Yeah, I bet they're happy, though. But yeah, I mean, I'm just not sure, is there really no other website that's good enough to go against three almost duplicate pages?

Yeah, I've heard this before. I think specifically about Romanian websites, maybe some other countries as well, where, especially with some of the quality updates that we made, I think over the course of this year, earlier this year, people have been complaining and saying that the quality overall isn't as good as it used to be. I don't know what the plans are there. I do know that the search team is aware of these issues. So it's something that they've also seen. They've also kind of seen the complaints that we've passed on to them. So it's not something where, from our point of view, I'd say this is the way it should be. But rather, this is maybe, I don't know, something that we could still be working on. OK. So the team is aware of this kind of thing? Yeah. I mean, these kinds of search queries are really useful for us. So if you run across queries where you see this is actually really bad, or the search results are kind of OK, but they're filled with essentially copies of the same thing over and over again, then that's really useful for us to have. Yeah, I'm sure people are buying from that website. It just doesn't seem organic to have the same multiple copies, as you mentioned, basically the same page ranking, outranking everything. I'll leave you a link in the chat to my product forums page, if you have any updates. Cool.

Sorry, can I follow up on this question? Sure. Isn't it like a rule that you should not index tag pages and search pages as well? You should noindex them. We do have a guideline that you should block search results pages, and there are two reasons for that. On the one hand, what can easily happen is that we crawl a lot of pages and we overload the server.
And on the other hand, what often happens is that these pages tend to be lower-quality pages, in that they're not as useful for users as maybe something like a clean category page would be. With regards to tag pages, there is kind of a question there as well: is a tag page a search page? Or is a tag page more like a category page? And that's something that really kind of depends on the way that the website is set up and the way that the website tends to handle that. And for both of these, I tend to see it as something where we would see it as a strong recommendation to block these kinds of search results pages from being indexed, but not as something where we would say, if we ever see a search results page, then we would apply a manual action to that website. Because what we've also seen a lot of times is that there are some really good search results pages, where you look at them and you say, well, this is actually not an auto-generated page, like you would just generate a page based on a million items and filter for keywords. It's actually more like a category page. And in those cases, I think it makes sense to have those indexed, because maybe people are looking for, I don't know, blue running shoes, and you have a search page for blue running shoes. Actually, it's a search page and not a category page, but it contains all of the blue running shoes that you have, which is maybe a good result for users.

So if the search page provides value to the visitor, it's OK to index these pages? So I feel there is a huge opportunity for manipulation, to automatically generate these long-tail keywords and blah, blah, blah. Yeah, yeah. I think that's something that has always been around and is always a worry. But for the most part, I think our algorithms are generally pretty good at recognizing the quality of the page. So if we see that a site is automatically generating search results pages and they're generating millions of them, then our algorithms would probably look at that and say, well, these are pretty low-quality pages. Therefore, maybe there's more of the website that's also low quality. Maybe we should be more critical with regards to ranking the site overall. So that tends to balance things out a little bit. Sometimes things get by our filters or by our algorithms, and we show them in the search results more visibly. So we're not perfect in that regard. Thank you.

Hey, John, I have some follow-up questions on that. So you said, I mean, you look at it from the quality perspective, whether the pages are good enough to be shown in the search results. My question is regarding the listing or the category pages that you were talking about. How can we ensure that the quality on those pages is really high? I mean, considering that we are only going to provide the product details there, like thumbnails or price details and everything. So how can Google actually decide or evaluate, from that quality perspective, which pages are of better interest compared to others? So how can we diagnose those particular things? I think that's hard. There is no simple answer there, especially when you're talking about pages that are automatically being generated. That's something where you almost have to take your own expert knowledge of the topic area and apply that. So what I've sometimes seen is sites try to find other metrics that they can follow to get a rough understanding of how users would perceive the quality.
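[Editor's note: as an illustration of the kind of rule of thumb John goes on to describe, here is a minimal Python sketch of a heuristic for deciding which internal search-results pages to keep out of the index. The "q" parameter name, the two-keyword threshold, and the traffic cutoff are assumptions for the example, not anything Google has published.]

    from urllib.parse import urlparse, parse_qs

    def should_noindex_search_page(url, monthly_visits, max_terms=2, min_visits=10):
        """Rough heuristic: flag thin internal search pages for noindex or
        robots.txt blocking when the query has more than `max_terms` keywords
        or the page gets almost no traffic. Thresholds are illustrative."""
        query = parse_qs(urlparse(url).query)
        terms = query.get("q", [""])[0].split()
        return len(terms) > max_terms or monthly_visits < min_visits

    # A long-tail internal search page with very little traffic gets flagged.
    print(should_noindex_search_page(
        "https://example.com/search?q=blue+running+shoes+cheap", monthly_visits=3))  # True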
So I've seen sites use things like analytics information with regards to the number of visitors that actually go to these pages, the behavior of the users on those pages. Those are sometimes useful ways of looking at this. I've seen some really large sites say that any search results page, for example, that has more than, I don't know, two keywords in the query is something that they would automatically block. There are different approaches that you can take. It's not something where I'd say there is one simple answer that works for everyone. I think, overall, if you have good or reasonable category pages, then I would block the search results pages from being indexed, just so that you don't have to worry about all of these fine details like, is this search results page good enough? Or is this kind of not good enough? And how do I quantify that? If you have clean, good category pages for your website, then you don't need to rely on the search results pages to also be indexed. Sure. Thanks, John. Sure.

Can I jump in with the next question? All right. Hi. Sorry to jump in with the second one, but I've got a website that I've been battling with for about the last six months. I've just posted the question and the link to the website, it's called rockend.com.au, in the chat. So I have 301 redirected the home page. So if you go to rockend.com.au on any variation, it 301 redirects. And I've completed the change of address in Google Search Console. Yet, if you do a site:rockend.com.au search, the home page and a lot of the other pages are still in the index. And I was hoping you could kind of help me diagnose whether there's something we can do now, or if there's something we can do privately, because this has been breaking my brain.

That's probably normal. So the confusing part there is, when we process a redirect like this, we still understand that the old site is associated with these new URLs. So what tends to happen there is, with the site query, when you explicitly look for the old site, we'll say, well, we know this old site, and we still have a bunch of these URLs in our memory, so we'll show them to you because you're explicitly asking for them. And in practice, though, when you look at the pages themselves, for example, when you look at the cached page or when you look at the page with the Inspect URL tool, you'll see that the canonical is actually the new URL. I don't know, with the cached page it's kind of hard to tell because you overwrite, I think, part of the top. But in general, what you'll see is that we've moved on to the new URL as the canonical URL, which means we've processed the site move. But if someone explicitly asks us for the old domain, then we'll still show it. So essentially, it's working. It's probably working OK. I don't see it offhand, but it's probably working OK. And just because you're seeing the old site in the site query doesn't mean that the old site is still the one that's being indexed. So the simple way you can double check as well is to just search for the page title. And when I try that, I get the .com version. So I think your site move is OK. Just the site query is kind of confusing because, on the one hand, we're trying to be helpful and say, well, you're looking for this site, and we know about this site, let's show it to you. On the other hand, you're looking at those results and saying, I'm trying to diagnose a specific situation based on these results. And that's not something that you'd be able to do with a site query there.
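[Editor's note: a small Python sketch of the kind of check discussed here, using the requests library to follow a moved home page and print each redirect hop plus the final URL. The domain and the exact hops are hypothetical; ideally the old URL 301s straight to the new canonical home page.]

    import requests

    def check_redirect_chain(url):
        """Fetch a URL, follow redirects, and report every hop and the final URL."""
        response = requests.get(url, allow_redirects=True, timeout=10)
        for hop in response.history:
            print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
        print(response.status_code, "final:", response.url)
        return response.url

    # Hypothetical example: the old .com.au home page should end up on the new .com home page.
    check_redirect_chain("https://www.example.com.au/")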
OK, so the follow-up to this, then, is, and I've also just posted it in the chat as well, that the old website is still ranking for, for example, the keyword of strata management software when googled in Australia. It is in position three on .com.au, the old domain, and the .com, the new domain, also shows up. So this is the kind of main point of contention for the client, because we do SEO for them, and they currently have two different websites ranking when they should have had everything consolidated into the one. And the page in there, if you're wondering, is still the old address. OK, let me just try that query. I can see the .com. Oh, OK, further down. So what probably happened is we just don't have that one URL moved over yet. So let me just double check. I mean, if it's redirecting, then that's generally not a problem. So my theory behind this is that currently, if you click on the URL in the search results for the .com.au, it 301s to .com.au again, and then it 301 redirects to .com. So it might be something broken there. But yeah, I still see that it's in position three and six. So ultimately, clients aren't really complaining. But I want to make sure that .com is the only one in the actual search results at the end of the day.

Yeah, I think what you could probably do is use the Inspect URL tool and submit that URL to be re-crawled and reprocessed. That would probably pick that up. What I expect would probably happen here, though, is that extra listing that you have kind of in the sixth place with the .com.au would disappear in favor of the .com version, which, I don't know, I mean, the ranking would be OK. I think that would be fine. It's just that you don't have three listings shown for that query and just would have two. So I don't know if you would be much better off by forcing that over, but it'll happen anyway over time.

OK, so the website was migrated six months ago, and the .com.au, the old domain, is still in position three. So you're saying that if the change of address was completed successfully, and the 301 redirect is still in place correctly, even if there is a redirect chain, then eventually Google would pick it up. And if I want to force it across, I should inspect the URL in Google Search Console, the new one, test the live URL, and request indexing on it. Is that correct? Yep. Yep. Awesome. Thanks for that. Cool. Sometimes it's a bit tricky with individual URLs, but site moves can take a bit of time. And depending on how things were set up and when they were set up, that might take a bit. So what I also notice is your site, I think, is being indexed with mobile-first indexing. So you need to make sure that the mobile version also redirects and has the canonical and all of that set up as well. Or maybe it's even responsive design, which would make it a lot easier. Yeah, it's responsive, and there's no m-dot subdomain or anything like that. So yeah, it also does have subdomains which are still active on the old domain. And there's all sorts of other things. But if you say that I can probably get the old results gone just by requesting indexing on the pages, that will probably solve my problem for the majority of it. So thank you for that. Cool.

All right, let me run through some of the other questions that were submitted, and then we can do some more from you all. We have an archive page that has quite similar images to the child pages that are listed on the archive. The archive page is banned from safe search, while the child pages are considered safe.
What factors do the safe search algorithms use, apart from images and profanity, to consider whether a page is safe? I don't know what the exact factors would be. But we take into account a number of different things, and it can certainly happen that a part of your site is seen as being fine with regards to safe search, and part of your site is seen as something that we would filter with safe search activated. So usually what I recommend doing there is making sure that you have a clear separation of this kind of content, so that it's easier for our algorithms to say, well, everything in this folder here can be filtered by safe search; everything in the rest of the site, or in this other folder here, is generally family-friendly or safe-search-friendly content, and we'd be able to show that appropriately. So that's probably the direction I would head there. It sounds like your content is kind of borderline with regards to how our safe search algorithms might look at it, which is fine. I mean, sometimes there is content out there like that. But if you want to take that into your own control a little bit more, then I'd recommend finding a way to separate those two parts a little bit more clearly, so that it's a little bit more certain with regards to safe search which parts would be shown and which parts would not be shown.

My website gets hundreds of links that seem to be spammy. I suspect maybe one of my competitors is trying to decrease my ranking. Do I need to keep disavowing these links week after week, or should I only be worried if I get an unnatural links manual action? So in general, we do automatically take these into account, and we try to ignore them automatically when we see them happening. And for the most part, I suspect that works really well. I see very few people with actual issues around that. So I think that's mostly working well. With regards to disavowing these links, I suspect if these are just normal spammy links that are just popping up for your website, then I wouldn't worry about them too much. Probably we figure that out on our own. If you're worried about them regardless, if it's something that you're not sure about, and you're losing sleep over these links, and you just want to make sure that Google handles them properly, then using the disavow tool is perfectly fine. The disavow tool is not an admission of guilt or anything like that. You're essentially just telling our systems, these links should not be taken into account for my website. And there are multiple reasons why you might want links not to be taken into account. And that's not something that our algorithms would try to judge for your website. So if you're seeing spammy links from certain sites, using the domain directive makes it easy to handle these in the disavow file, and you can just submit those there. On the other hand, if you feel that these links are pretty normal, spammy, and something that any algorithm would figure out, then you can just leave them alone and just kind of move on. I think for most websites out there, really the largest majority of websites, you don't need to use the disavow tool. And that's also why we keep the disavow tool so separate from Search Console, so that you don't get tempted into using the disavow tool because it looks like a normal part of Search Console that everyone should be using, when it's really something that you only need to use in really extreme cases.
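[Editor's note: a minimal Python sketch of building a disavow file with the domain: directive mentioned here. The domains are hypothetical; the format shown (comment lines starting with "#", "domain:" entries for whole sites, bare URLs for single pages) is the documented disavow file format, and the resulting file would be uploaded through the disavow links tool.]

    def build_disavow_file(spammy_domains, spammy_urls=(), path="disavow.txt"):
        """Write a disavow file: '#' lines are comments, 'domain:' entries cover
        a whole site, and bare URLs cover individual pages."""
        lines = ["# Links I do not want taken into account for my site"]
        lines += [f"domain:{d}" for d in spammy_domains]
        lines += list(spammy_urls)
        with open(path, "w", encoding="utf-8") as f:
            f.write("\n".join(lines) + "\n")
        return path

    # Hypothetical spammy domains pointing at the site.
    build_disavow_file(["spammy-links.example", "bad-directory.example"])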
We received a notification from the trust and safety team regarding low-quality search results pages on our site being crawled and indexed. While we're working to solve this problem, is there any short-term risk of a manual penalty being applied, an algorithmic penalty, or Google taking any other action to remove those pages from the index? I don't know which notification you received there, but in general, when you receive a notification like this, it's because someone from the webspam team has taken a look and seen that there are real issues with regards to those search results pages. And usually what also happens at the same time is that a manual action is placed on specific URL patterns from your website, so that we don't index more of those pages. So probably, if you received a notification like that, then we're already trying to filter those out of the search results. So that risk that you're worrying about there has probably already happened. Solving that and going through the reconsideration flow to remove that manual action, I think, makes sense regardless. Like I mentioned before, with search results pages, there are two aspects there. On the one hand, the quality side, which it sounds like you ran into there. And on the other hand, the technical side, that we end up crawling a lot more pages than we actually need to. And both of those can play a critical role for some websites. So that's something where, even if a manual action is already in place to take these pages out of the index, I'd certainly try to find a way to clean up the technical side as well, so that we don't have to crawl and index a ton of different pages just to keep up with your website when these pages wouldn't actually be that useful for users anyway.

John? Yes? Yeah, I have a follow-up question. This is regarding a manual action. So I'm recently working on a website, I'm kind of taking charge of the website, and it is targeting the international market as well. We are targeting all these countries with subfolders. And the thing is, I was going through this manual action thing, and I just noticed they have a manual action for unnatural links, which is affecting some pages of the website, or third-party websites, maybe a blogger, or something like that. Apart from that, we don't have any backlinks as such, but I just wondered, what is the reason for having a manual action for those kinds of backlinks? Apart from that, I'm seeing this manual action notification in every country site. I mean, I have verified the different folders for different countries individually as sites. So I'm getting the manual action for each and every country. So is having a manual action for the main .com going to affect my other subfolder structures as well? So are they subfolders? Or are they different domains? Yeah, they are subfolders. OK, yeah. I mean, if they're subfolders, then that would kind of make sense to show that, because it's essentially the same domain. So showing the same manual action there, I think, would make sense. There are two places where I remember that we take this kind of targeted manual action with regards to specific pages and specific links on a site. On the one hand, that could be if a site is buying links, for example, for very specific elements within the site, or doing something kind of sneaky with regards to links in that regard.
So that's something where we sometimes take very targeted action and say, well, we'll just ignore this specific set of links, because we know that it's pretty well-defined. It's something that we can isolate fairly well, and we'll just work to ignore those on a manual basis. Another place where I've also seen that happen is specifically around reputation management. So that's one area where we'll also often see that people, well, people or usually SEOs or reputation management companies, build up links to pages that they find favorable. And these might be completely legitimate websites. They're talking about maybe one person or one company in a reasonably favorable way. And in order to try to hide the less favorable content in the search results, they'll go and buy links or build links for specific pieces of content on essentially legitimate websites. And from our point of view, that's also problematic, because these are unnatural links to those pages. And in order to resolve that, sometimes when we have to do that manually, we will do a manual action specifically regarding those links and those pages. So that's something where, as a website that might be affected by something like this, there's nothing really that you need to do there. It's not that you need to clean up those links that someone else placed that are pointing at articles that you have on your site. But rather, we've kind of taken care of that on our side with a manual action. So from our point of view, if this is isolated like that, then that wouldn't be affecting the rest of your website. It would really be affecting that specific set of links pointing at a specific set of pages on your website. And we're essentially just neutralizing those, so that you don't have to worry about that or do anything with that.

Sure. Thanks, John. On that, I mean, can we get a set of those examples? Basically, we don't know what those pages actually are. If somebody knows who's actually building the links, they know what pages they're targeting. But when it's coming through a third party, or anyone is building backlinks for any third-party website, we don't know. So we're kind of in a state where we don't know what pages to diagnose. And I mean, we have a very huge website, millions of pages, and we don't know where it is getting affected. Yeah, I know the webspam team tries to provide examples whenever they can. But sometimes that's not that easy. So the usual recommendation that I'd have there is to file a reconsideration request and give some information on what you're seeing or what you're kind of confused about. And sometimes the webspam team will be able to come back to you and say, well, it's affecting this set of queries or this set of pages on your website or on the external website. So sometimes they can come back with that kind of information. It's not guaranteed that they'd be able to do that. OK, yeah, thanks.

Hey, John. Hey, John. Actually, I checked my Google PageSpeed Insights recently, and I saw that the server response time was around 1.5 seconds, which is very terrible. So I was thinking of trying some new hosting or moving my hosting. So my question is, does changing the IP affect the ranking or not? No, changing the IP is fine. OK, one more thing. Like, this is my fourth time, same question. The images are still not indexed. And I am unable to find any reason.
Like, I am serving the latest version of the image and everything is fine. But still, the images are not getting into the Google search results. And the old Search Console was able to show us whether the images are indexed or not, but in this Search Console there is no option from which we can look at it. So what can I do right now? It's been more than one month. What do you mean, more than one month? The images, all the images are de-indexed. The site is on WordPress, and only the author images which are posted on the site are available in Google Images. There are no other images available on Google. And that's why my ranking is affected and it's going down day by day. That would only affect traffic from Google Images, though. So the normal web search ranking wouldn't be affected by that. But if you can copy the website and some sample images into the chat here, I can take a look at that afterwards. OK, I have told you the site. You can check the comments, technoxyz.com. And I think this is my fourth time telling you. OK, cool. T-E-C-H-N-O-X-Y-Z.com. OK, yeah. There is no manual action and no messages in Google Search Console regarding this.

All right, in reference to the recent announcements on mobile-first indexing for new sites, does this come into play for new sites or only for new domains? Important to know, since the default state for new websites will be mobile-first indexing, and I'm considering redesigning my site. It's specific to new domains. So essentially, what is happening is we're shifting from a model where we have a list of all the domains that are already on mobile-first indexing to a model where we'll have a list of all the domains that are not on mobile-first indexing. So anything that is new will be processed with mobile-first indexing by default. So if you're doing a redesign, and if you're launching or revamping on a separate folder on your website, or even on a separate subdomain of your website, all of those would still be seen as being part of the old domain. So whatever your old domain's status is would play a role here. So if it's already moved to mobile-first indexing, then that redesign will be moved as well. If it hasn't moved to mobile-first indexing, then that redesign, that change on your website, would also not be under mobile-first indexing. And as we've talked about before, it's not that mobile-first indexing is a ranking factor. So it's not that you need to be in mobile-first indexing in order to be competitive, but rather this is more of a technical thing on our side with regards to how we crawl and index the content from your website.

I have a publisher who has a site on WordPress that is considering disabling a heavy plugin for bots only to improve page speed scores; the plugin makes no visual changes. Would that be advised or not recommended? If you have a plugin that doesn't make any changes, then personally, I would remove that, because usually the less overhead you have with regards to code that's running on your server, the less you have to worry about. So specifically with regards to maintenance, but also speed, like you mentioned here, those are two aspects that I think are pretty critical. So if you have a plugin that isn't doing anything useful for your site, then maybe it makes sense to disable that.

Working for a publisher, with regards to URL structure, we're using very short URLs, like domain.com-1234. They're easy to share. But the URLs for news articles don't contain keywords or similar information.
Many SEO agencies have pushed us to have speaking URLs, so with keywords in them. In case the speaking URLs get generated from titles, which contain numbers that change after a while, or the editor makes a typo, the URL could change again a few minutes later. This is why we decided to use just IDs. Which URL structure would you recommend for news articles? And is it advisable to change the whole URL structure to speaking URLs, even though it would be necessary to implement redirects for all existing articles?

So generally, in talking with the team on our side, they've always said that you don't need to have keywords in URLs, and they're essentially equivalent on our side. So we use URLs primarily as an identifier. There is a very small factor with regards to keywords in URLs, but if you're working on a larger website, then that's something where you probably won't see that effect at all. The bigger thing that I would watch out for here, which I think is the one that you're worrying about, is if you change from one URL format to another, what will happen then? So in general, what happens there is that we need to re-crawl and reprocess the whole website in order to understand what the new status is. So that's something where there's definitely a longer period of time where you'll see fluctuations across your website, until we've been able to settle things down again and understand that this new URL on your website is linked from these different parts of your website, and the context of these pages is like this or like that. So you'd definitely see a reasonably long time of things in fluctuation there. It's a lot harder for us to reprocess something like this than, for example, a site move, where if you move from one domain to another and you map your URLs one to one to your new domain, then we can process that pretty quickly, sometimes within a day or so. But if you change all of the URLs across your whole website, then that's something where I imagine you'd see changes in the order of several months until everything has settled down. So that's kind of the one aspect there, that you'd see these fluctuations. The other aspect, that you might only see a very minimal change in the end, kind of also comes into play there.

So that's more a question on your side. Are you willing to take into account that maybe for a couple of months, things will be fluctuating quite severely, just so that you have this potential improvement with regards to some keywords in your URLs? So that's just kind of tricky. I don't know which direction I would take there. If you're sure that keywords in URLs make sense, for example, if you can separate pages on your website out better based on folders, then you can separate things out a little bit more clearly so that you can track specific parts of your website more easily. So kind of like you mentioned here, you have /news/politics. If you want to track the performance of your politics content, then having that in a separate folder makes it a lot easier compared to everything in a flat structure where you just have one number. So from that point of view, I'd try to find the different pros and cons of the new structure and the old structure and think about how many pros you need to kind of cancel out this potential multi-month period of fluctuation. Or maybe you're saying, well, this is actually more of a long-term goal, and we'd like to get to more keywords in URLs, but we don't need to go there from one day to the next.
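[Editor's note: a minimal Python sketch of the one-to-one redirect mapping such a URL migration would need, from old ID-style paths to new keyword URLs. The IDs and slugs are hypothetical; in practice this lookup would sit in the server so that the old URL returns a 301 to the new one.]

    # Hypothetical mapping from old ID-style URLs to new keyword URLs.
    REDIRECT_MAP = {
        "/1234": "/news/politics/budget-vote",
        "/1235": "/news/sport/cup-final-report",
    }

    def resolve(path):
        """Return (status, location): a 301 to the new URL when the old
        ID-style path is in the map, otherwise a 200 on the requested path."""
        if path in REDIRECT_MAP:
            return 301, REDIRECT_MAP[path]
        return 200, path

    print(resolve("/1234"))  # (301, '/news/politics/budget-vote')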
And maybe we have the engineering resources to do that incrementally. Then that might be something where you'd say, well, all new content gets these new-format URLs, and the old content stays on the old ones until a time when we're sure that things are working out with the new URLs. And then, at that point, we migrate the old URLs as well. And if it's a news website, then usually the older URLs are less visible in search anyway. So if there's some fluctuation with regards to how the old URLs are shown in search, then that's probably not something that would be critical for your website. So that might be another approach to take, to take it incrementally to a state where you're saying, well, this is more of what I'd like to have.

Oh, wow. We still have a bunch of questions left and almost no time. Let me see if I can run through a bunch of these, and we can run over a little bit as well. You mentioned hidden content and ranking and not being able to use that for snippets. With regards to tabs and mobile-first indexing, I don't know what the current status is there. I need to double-check. My understanding is that with regards to mobile-first indexing, we would see this as content that's normally visible as well. But I need to double-check; I'm not 100% sure.

Working on an e-commerce site that is regularly changing some category pages to new ones, leaving behind a lot of blank pages with a 200 result code. The behavior can't be changed, so I need to do something there. So I'm thinking of redirecting them through Google Tag Manager with a JavaScript redirect. Do you think that would work? Or which one would you prefer, JavaScript or meta refresh? I think a JavaScript redirect would be preferred there. We'd probably pick that up. But especially if you're going through Google Tag Manager and the JavaScript redirect, then that's something that would take a bit longer to process than a clean server-side redirect. So depending on how big of a problem this actually is for your website, it might be worth trying to find a way to do that server-side. If this is more of a cleanup thing that you think you'd like to do in the long run, then maybe that's OK to just leave it like this.

I social bookmarked my site. After one week, I noticed that the site which I bookmarked has marked me as spam. So do the pages have any effect on my ranking? I don't think those pages would have any effect on your ranking, because probably we're also ignoring those social bookmark sites, the links from there. That's a really, really old SEO strategy, and we have a lot of practice in recognizing those kinds of links and just ignoring them.

When will the speed and Search Console tools get updated with the evergreen Googlebot? Any timeline? I don't know. I know the team is working on that, so I wouldn't expect it to take too much longer, but I don't have any specific timeline.

Question about Google News. I saw a big drop in news since the second week of March, and it kept declining ever since. I don't know about Google News and how that would play together. With regards to being penalized in Google News, I suspect if that were the case, then you would not see your site at all in Google News. It wouldn't be that it would just be ranking lower in Google News. But I don't really have much insight into the Google News side.

Can you confirm that the links report, top linked pages externally, in Search Console does not include 404 URLs linked externally?
I think it would be very useful to discover broken links from Search Console, something that we can currently only do with external tools. As far as I know, the linked pages report would not include links to 404 pages, at least not 404 pages that we know about. Because what needs to happen with regards to links is we have to have one page that is linking, like where the link is coming from, and a link destination. And if we don't have either of those, then essentially that link is not something that we can track in our systems, or that would be useful to track in our systems. So for that, probably using external tools is an option.

Is it a good idea to use my PPC landing pages to rank for local areas? Maybe. That's kind of up to you and what kind of landing pages you have. But what I've seen sometimes is people using PPC or similar setups to test landing pages and see how to create landing pages that perform really well, and then to update the normal search ranking pages to that newer model as well.

Half my pages are shown as crawled, currently not indexed. It's been quite a long time now. I'm guessing it's because of low-quality content, but I'm not really sure. Is there any way to check the reason why they're not indexed and how to resolve that? No, there's usually not really a way to double check the reason why a page wouldn't be indexed. It's completely normal that we don't index all pages on a website. So just because you see a large chunk of pages that are either crawled, currently not indexed, or, I think the other one is, submitted and not indexed, that can be completely normal. If you're worried about the quality of your website, then usually that means there is something with regards to quality that you can improve. And we do try to take quality into account when it comes to picking which pages we want to have indexed. So that could be something to focus on as well. Obviously, that's kind of tricky, because if we're not indexing those pages, we're not taking the quality of those pages into account either. So you'd need to focus on the quality of your website overall, not just the quality of the pages that are currently not indexed.

Can anyone tell me how different free hosting, like Blogspot, is from paid hosting? Will it have any effect on my search results ranking? Is Blogspot considered inferior to hosting with WordPress? So these are all different ways of hosting a site. Usually the big difference between the free or simpler platforms and something like WordPress is with regards to the flexibility and the customization opportunities that you have there. So if you just want to have a website out there and you just need to have your content visible at all, then using any of the platforms out there is usually a good idea. But if you need to do something very specific, if you want to track things in a very specific way, if you're, I don't know, collecting leads in a very specific way, if you need to do something fancy with regards to multilingual users or users in different countries, then that's something where having your own hosting, maybe on WordPress, maybe on some of the other platforms out there, can make a lot of sense. So that's usually the way that I would look at it there. If you're not sure, what I'd recommend doing is setting up your own domain in any case. That's something that you can do with most of these free platforms.
And when you're using your own domain, if you later switch to a more customizable platform like WordPress or anything else, then you keep all of your content and keep all of the URLs, so that you can just move on to the new platform. So that makes it a lot easier to migrate. And that way, you can kind of work with the free platform, you recognize where the limitations are, where you'd like to do more than what is possible, and if you find a different platform that offers all of those opportunities that you're looking for, then maybe at that point it makes sense to move on. But it's certainly not the case that there's any inherent SEO advantage to any of these platforms. Your content is hosted. We can access it. Users can access it. So from that point of view, that's all fine.

Is it a good idea to block a certain URL from being crawled? Or is it better to let Googlebot decide, and to use and configure the URL parameter tool? The URL parameter tool is a great way to give us hints about which URLs or URL patterns you don't want to have crawled. But it's not the same as a robots.txt file. So if you need to block crawling, for example, if your server does something that takes a lot of time to be processed when we access the URL, then that might be something where you want to use the robots.txt file to really be clear and say, I don't want you to access these URLs at all.

How can we report manipulated structured data? The spam report page has no relevant category. There's a structured data or rich snippets spam report form that you can use.

For single pages like about us, is it recommended to use rich snippets for the single article on the page or not? Totally up to you. For something like an about us page, if you have a structured data type that fits that, I think that's fine. I don't think you'd see any special handling in the search results with regards to using just article markup, for example, on a page like that.

Our website has 140 language- and country-localized versions, all connected correctly with hreflang, but Google is often showing the wrong versions in the search results. Is there any limit to the number of hreflang alternates that we can include? Does Google treat this as a suggestion rather than a strict directive? There's no limit to the number of alternates that you can provide. However, there are practical limitations in the sense that if you have 140 copies of all of your content, then we have to crawl and index all of those copies, and we might not be able to crawl and index all of those copies, depending on the website. So that's something that sometimes plays a role there. Does Google treat this as a suggestion or a directive? We treat this more as a suggestion. It gives us a little bit more information about those pages. We can't treat it as a strict directive because we really need to make sure that it actually works out well. Or rather, it's not that we can't: most of the time, when we don't take that into account, it's more that there are technical reasons why we don't do that. So, for example, maybe clean backlinks and canonicalization play a role there as well. Then the general setup of the hreflang constellation is something that often plays a role there. So it's not that I'd say we treat it as a subtle hint, but more as something where I'd say there are a few things that need to align for us to be able to use it properly. And just because we see the hreflang link on one page doesn't mean that we will always show exactly that link.
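[Editor's note: a small Python sketch of the hreflang markup under discussion, generating the reciprocal link elements that every language/country version of a page would carry, including a reference to itself and an x-default fallback. The locales and URL pattern are hypothetical.]

    def hreflang_links(base="https://example.com", locales=("en", "en-gb", "de", "fr")):
        """Build the set of reciprocal hreflang link elements for one page;
        every localized version should list all alternates, itself included."""
        links = [f'<link rel="alternate" hreflang="{loc}" href="{base}/{loc}/" />'
                 for loc in locales]
        links.append(f'<link rel="alternate" hreflang="x-default" href="{base}/" />')
        return "\n".join(links)

    print(hreflang_links())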
We really need to make sure that the rest of the signals also align before we can show that hreflang URL in the search results. And especially if you have 140 versions, then that's a lot of versions that can kind of subtly fall out of alignment. So I imagine, especially with a bigger setup like that, that it gets tricky. I don't know which website this is, but in general, I'd recommend using fewer variations rather than more variations. So in particular, if you have the same language content for a number of different countries and you don't really have anything specifically country-dependent on those pages, then I'd recommend folding those together and just having one really strong language version rather than having all of the individual country versions. Obviously, for larger websites, that's a lot easier said than done. OK. I imagine there are still more questions that got submitted. We're way over time now. So I'll set up the next batch of Hangouts. You're welcome to drop things there. Or if there's something more general that you'd like to have answered, we're doing another set of Q&A videos. So for those, you essentially just need to tweet the question at Google Webmasters or the Google WMC account with the hashtag Ask Google Webmasters. And we'll be picking out a bunch of those questions soon. And start recording videos. And the aim is to do that more on a regular basis so that it's not just like one-time shots, but actually that we do this on a regular basis. And these are specifically more, I guess, general questions that are easier to answer where the question fits into a tweet, which are not site-specific and where we can kind of formulate a general answer that would be useful for everyone. So you're welcome to do that as well. And of course, there's also the Webmaster Help Forum, where a bunch of volunteers spend their time helping out. And they have a lot of experience with a lot of these Webmaster questions. Pretty much everything that comes up in the Webmaster Hangouts, they've seen as well. So if you drop questions in the forum there, that's also a good place to get advice. Or if you've been following these Hangouts for a while, you can probably also help out in those forums and help to answer a lot of the questions that are dropped there, too. OK, so I think with that, let's take a break here. It's been great having you all here. Thanks for all of the tons of questions that were submitted this time. And I wish you all a great weekend. And hopefully we'll see each other again in one of the next Hangouts. All right, bye, everyone. Thanks, John. Bye. Have a great day. Thanks, John. Bye. Bye. Thanks, John.