All right, welcome everyone to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland, and part of what we do are these Office Hours Hangouts, where webmasters, publishers, and SEOs can jump in and ask any questions that are on their minds with regards to web search. As always, a bunch of questions were submitted. But if any of you three or two, oh my gosh, remaining folks want to ask a first question to get started, feel free to jump on in.

I could start on this one. Go ahead.

OK, hi, John. Hi. I submitted a question regarding facets in Drupal. I don't know if you already read the question or not. Just very briefly. Yeah, so the thing is, what I basically wanted to know is: in Drupal, when you use facets for e-commerce filtering, for example, each filter is a link with a nofollow attached to it. And I was wondering if this hurts SEO in some way or another.

I think in general, that should be OK. The important part for us from these category pages is that we can reach all of the product pages from there. So as long as we can get there by crawling through the pagination normally, then we probably don't need to also look at the facets. If, on the other hand, there are products that only become visible when you activate some of these facets, then that would be a case where we probably would need to crawl through those facets to get to those products. But I think for pretty much all e-commerce sites that I've seen, that should just work out.

OK, and let's say there are 10,000 or even millions of different filter combinations. Doesn't that hurt the crawl budget in some way? Or is using a nofollow the correct way, so that crawl budget isn't lost on these pages?

I think that'd be a good approach there. So I don't think that would cause any extra problems. OK, thank you. Sure.
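To make that crawl question concrete, here is a minimal sketch, assuming hypothetical category and product URL patterns and simplified markup, of how you might verify that every product page is reachable through plain pagination links alone, treating nofollowed facet links the way a crawler would:

    # A sketch, not a production crawler: fetch pages, follow only links that
    # are not marked rel="nofollow", and stay on the same host. The category
    # and product URL patterns are invented for the example.
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlsplit
    import requests

    class FollowableLinks(HTMLParser):
        """Collects hrefs from <a> tags that do not carry rel="nofollow"."""
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "a" and "href" in a and "nofollow" not in (a.get("rel") or ""):
                self.links.append(a["href"])

    def crawl(start_url, max_pages=500):
        site = "{0.scheme}://{0.netloc}".format(urlsplit(start_url))
        seen, queue = set(), [start_url]
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            parser = FollowableLinks()
            parser.feed(requests.get(url, timeout=10).text)
            queue += [u for u in (urljoin(url, h) for h in parser.links)
                      if u.startswith(site)]
        return seen

    reachable = crawl("https://shop.example/category/shoes")
    print("product pages reachable without facets:",
          sum("/product/" in u for u in reachable))

If the count comes up short of your catalog size, some products may only be reachable through the faceted links, which is exactly the case John says would need crawlable facets.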
Hi, John. Around the indexing bug that's resolved now but is still impacting Search Console, I think that'll be fixed fairly soon. But there are lots of small bugs that seem to be lingering, maybe around that, like the cache dates, the rich results, the recipe results, the mobile-friendly bugs. Are they all unrelated? Or it's hard to say what's related to the indexing bug, because obviously indexing is kind of core to all of these other things working right.

Yeah, I don't know. So my general thought is, like you said, indexing is pretty core. So if there's anything broken with indexing, then it's probably worthwhile to wait for that to settle down first, before assuming that all of these other issues are completely separate. I suspect some of them are separate. Like, the message I've seen from a bunch of people is around the mobile-friendliness thing, where people are getting notifications that their site is flagged as not mobile friendly, or some of their pages are. I suspect that's something that's unrelated, because it's something I've seen from earlier. But for some of the other issues, especially if they've just come up in the last couple of days, I'd tend to wait until the indexing things have settled down before assuming that they're completely separate.

And how long do you think you need to wait? Are you going to wait a week or so?

I don't know. At this point, I'm afraid to make any predictions. I hope that this is something that will be resolved fairly quickly, but sometimes these things take a lot longer than we expect. It is definitely something that we're taking fairly seriously, and we're making sure that when it's resolved, it's really resolved, and that we can notify folks when we think that either we have a clearer timeline or we've been able to resolve it. So I wouldn't assume hours, but I think weeks would also be too long. Somewhere in the area of days; it's really hard to say.

As soon as you can, you'll send out a tweet or two when you think it's fully resolved, or if anything is lingering? And the final question is, can you give us any type of estimate of how large, what percentage of the index, was impacted by this? I know Moz came out with their metrics, but they only looked at 23,000 URLs or something. You guys have a lot more URLs than 23,000 in your index.

A few more. I mean, hopefully we keep them. I don't know. It's something where, what happens with a lot of these situations when something breaks is we do a kind of post-mortem, where the team that's involved looks into what went wrong, the steps that led to that, where we got lucky, where we got unlucky, where things ended up going even worse, and what the overall impact was. And usually that's something that the teams do when the issues are resolved. They start on that, obviously, while things are being worked on, but it's something they tend to work on afterwards. So from my point of view, I first need to wait to see what they come up with, and then it's a matter of: is this something where we would or wouldn't be able to talk about the numbers? Like, what kind of numbers would be reasonable to even mention? Those are kind of open questions there.

OK, that makes sense. So it hasn't been released yet, even internally. And hopefully you'll be able to mention something in the future. Are they talking about any type of way to apologize to the SEOs, publishers, webmasters? I have no idea. I know there are lots of people involved.

I mean, technical issues happen. They happen to all websites. So it's something that is kind of awkward for all sites that are involved, but it can happen. It's not that we're pushing buttons to see what happens. No, of course, mistakes happen, obviously, yeah. No problem. OK, thank you very much for your time. Sure.

All right, any other questions before we jump into the ones that were submitted?
Yes, hi, John. Hi. So I'm using a CDN, and I've got a lot of traffic coming from something that the CDN classifies as a fake Googlebot. And it is ending with googleusercontent.com, which means that it uses the Google Cloud Platform. So it's not part of the documented Googlebot addresses, because those end with something else. But it's interesting, because in terms of the traffic, it behaves exactly like Googlebot. And I'm seeing that across several clients, by the way. So is it possible that this is some kind of new Googlebot that the CDN is blocking because it thinks it's fake, since it was not part of the documentation on the Google site? Or is it someone that's pretending to be Googlebot? Because I know that you are testing some new things out there, so I'm worried that maybe it is related to some kind of metrics that you're going to use in the future or something like that.

So that's something where, especially if you're seeing something that points to cloud hosting, I would assume that that's essentially some user who is running a service on that cloud hosting setup and doing it from there. In general, when we do roll new things out where we have a separate user agent, we try to be as clear as possible. We also try to update the user agent page in the Help Center. Sometimes that takes a bit to be updated, but it should at least be visible somewhere around there.

So it is safe to block that? I mean, you can also do a reverse lookup to see if it maps to the Googlebot IP addresses. And if it doesn't map to the Googlebot IP addresses, then you can do whatever you want. OK. Cool. OK.
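For reference, here is a minimal sketch of that reverse lookup, using only Python's standard library. The sample IP is from a published Googlebot range; results for your own traffic will depend on the IPs in your CDN logs:

    import socket

    def is_real_googlebot(ip):
        try:
            host, _, _ = socket.gethostbyaddr(ip)          # reverse lookup
        except OSError:
            return False
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        try:                                               # forward-confirm
            return ip in socket.gethostbyname_ex(host)[2]
        except OSError:
            return False

    print(is_real_googlebot("66.249.66.1"))

The forward-confirmation step matters: anyone can configure reverse DNS to claim a googlebot.com name, but only Google's name servers will resolve that name back to the original IP.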
One more question? Hello. All right. Go for it.

Actually, on one of my sites, the images, all of them, are de-indexed, except the author images. So what should I do, and how can I check what the reason behind it is? The old Google Webmaster Tools showed us how many images are indexed and how many I have submitted, but there is no option for that in the latest version. So how can I solve this?

So for images, we need to have both the image file and the landing page indexed. The image file wouldn't be indexed in web search, though. So if you use something like the Inspect URL tool, you wouldn't be able to see the image file listed there. I believe in the Inspect URL tool we'd also show that it's blocked or somehow not available for indexing, and that's perfectly fine, because it really only tests the web search side. On the images side, if you're seeing that all of the images of your website are de-indexed, then that sounds, to me, more like a general problem, and not something specific to one individual URL. So I'd double-check the robots.txt file. I'd double-check to make sure that you're not blocking Google Images in some other way. I'd double-check the manual actions section in Search Console, just to be sure that there is nothing from a manual point of view that's affecting things there. Those would generally be the kinds of directions that I would take there.

Also, there is no manual action. And other articles are ranking very well, and I am getting more than 100k traffic from Google, actually. But some of the articles are related to images, like DP images and things like that, for example, and they need the images to be indexed and ranked as well. But recently I saw that the rankings are gone and none of the images are indexed. The only thing that I changed recently is that I use the WebP version of the images, like you suggest in Google PageSpeed when we check, that we should use the latest formats like WebP. So that's the only thing I changed recently.

And the rankings you're looking at, are those in web search or in image search? In image search, I was ranking pretty well. Like, in Images, if you search for those keywords, I was coming out on top. But right now, there are no images indexed. They are all gone. So that's the problem for me. And I can't find anything wrong: the HTTP response code is good, 200 OK, and there is no problem in that.

OK. Can you maybe add your URL to the chat on the side here, and I can take a look at it? Can I tell you on the call? Like, I am on a hangout. Sure. T-E-C-H, T-E-C-H. So what was that again? T-E-C-H. OK. N-O-X-Y-Z, technoXYZ.com. All right. OK. I'll take a look afterwards. Thank you. Thank you.
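One quick way to rule out the robots.txt angle John mentions is to test both the landing page and the image file against the live robots.txt, here with Python's standard library. The URLs below are placeholders:

    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://example.com/robots.txt")
    rp.read()

    for url in ("https://example.com/some-article/",
                "https://example.com/images/photo.webp"):
        print(url, "allowed for Googlebot-Image:",
              rp.can_fetch("Googlebot-Image", url))

Checking both URLs matters because, as noted above, the image file and the landing page both need to be fetchable for the image to be indexed.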
All right. Let's see. OK. So there's a question about Google Jobs. All of the documentation says the logo should come from our Knowledge Graph card, but we're not seeing that pulled in. We tried structured data as well with our postings and sites, and that also doesn't work. Do you have any ideas, or someone who could take a look?

So I guess the question is whether this is something that is specific to the logo only, or if it's an issue with the jobs in general. My recommendation there would be to maybe start a thread in the Webmaster Help Forum with the details that you have: in particular, some sample URLs where you see this problem, the way that you'd like to have it displayed, and the way that it's currently displayed, so that someone can take a look at that and see whether there's a mismatch with the structured data types, or with the type of structured data that you're providing, with regards to maybe the sizes or the connection between the logo and the individual job entries, all of those kinds of things. But that feels like something that's kind of hard to look at on a one-to-one basis, like live in a hangout.

Is there any benefit in having schema in just one JSON file showing, for example, organization, author, news article, and reviews, rather than in separate snippets? So in general, the structured data on a site should be specific to the primary object of each page. For example, if you have a product landing page, then the structured data should be specific to that particular product. And if you have a different product, then the structured data should be specific to that other product. So from that point of view, having one set of structured data that you apply across all pages of your site generally would be incorrect. And what might happen there is that if the web spam team or the structured data team were to run across this, they might say, well, this structured data is implemented so incorrectly that it's best if we turn it off for this website, and we disable the showing of that structured data in the search results. So my recommendation there would really be to do that separately, and to make sure that per page you're showing the appropriate structured data, rather than just one set of structured data for all pages of the website.
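As an illustration of per-page structured data, here is a sketch that renders a JSON-LD block from one product's own record. The product data and field values are invented:

    # The point is that the JSON-LD is rendered from the page's own product
    # record, not shared sitewide.
    import json

    def product_jsonld(product):
        return json.dumps({
            "@context": "https://schema.org",
            "@type": "Product",
            "name": product["name"],
            "sku": product["sku"],
            "offers": {
                "@type": "Offer",
                "price": product["price"],
                "priceCurrency": "USD",
            },
            "aggregateRating": {
                "@type": "AggregateRating",
                "ratingValue": product["rating"],
                "reviewCount": product["reviews"],
            },
        }, indent=2)

    page_product = {"name": "Trail Shoe", "sku": "TS-01",
                    "price": "89.00", "rating": "4.4", "reviews": 31}
    print('<script type="application/ld+json">')
    print(product_jsonld(page_product))
    print('</script>')

The nested Offer and AggregateRating objects are the kind of sub-elements of the page's primary entity that John describes later as perfectly fine; it's the sitewide one-blob-for-everything approach that's the problem.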
How do we get online auctions to show up in Search? Most of the time, the auctions expire by the time Google indexes our page.

So in general, that sounds like a situation where it's hard for us to find these pages on your website quickly. That sounds like a general technical issue on the site, in that we're not able to crawl and index your new content as quickly as would be important for your website. So there are various things that you can do there. One thing is to maybe look at reducing the number of URLs that you're providing on your website, so that we have fewer URLs to crawl. Another thing is making sure that we can reach those URLs as quickly as possible, so maybe linking the newer auctions in a more visible place within your website so that we can find them a little bit faster. And in general, as well, making sure that there are no technical issues that are blocking us from crawling more, or faster, from your website. So making sure that the server is really responsive and reacts quickly, that it's not returning any error codes, and that we're able to fetch this content at the rate that you're producing it. That's the general approach there. And sometimes that's easier said than done. Sometimes, if you're producing a lot of really new content across your website, then it's a bit tricky for us to actually pick that up as quickly as we can. Using a sitemap file is a good way to let us know about new URLs on your website as well. So that's another option there.

You go on in the question to ask about the event structured data: could you use event structured data for auction events as well? From my point of view, as far as I know, event structured data should really be for physical events, in-person things, where at a specific date, at a specific time, this event is taking place, and people can go to that location and take part in that event. So for online auctions, that wouldn't really be fitting; for sales or similar things, that also wouldn't be fitting. So I don't think the event markup would be the right thing to do here. Also, the event markup itself wouldn't encourage Google to index these pages any faster.
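Since the sitemap file is the most direct lever mentioned here, this is a minimal sketch of generating one with lastmod dates using the standard library. The auction URLs and dates are made up; the idea is to regenerate and resubmit it whenever new auctions go live:

    from xml.etree.ElementTree import Element, SubElement, tostring

    def build_sitemap(entries):
        urlset = Element("urlset",
                         xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
        for loc, lastmod in entries:
            url = SubElement(urlset, "url")
            SubElement(url, "loc").text = loc
            SubElement(url, "lastmod").text = lastmod
        return tostring(urlset, encoding="unicode")

    print(build_sitemap([
        ("https://auctions.example/lot/48151", "2019-04-22"),
        ("https://auctions.example/lot/48152", "2019-04-23"),
    ]))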
The second question is, how do we get listing pages to rank higher than the product pages, listing pagination pages, or listing sorting pages? How do I make sure that my main page ranks higher? Essentially, how do I pick which URL is shown in the search results, I guess. And this comes back to the general theme that we looked at in the first part of the question: we need to be able to crawl and index the site in a structured way, in a reasonable way, where we don't run into too many duplicated areas, where we don't waste time running through all of these filters and pagination settings, so that we can really focus our crawling and indexing on exactly the content that you want to have crawled and indexed. And the clearer you can make that within the structure of your website, the more likely we will be able to follow your lead, reach into your site, index the pages at the right time, when you're producing them, and show them in the search results so that users can get there.

Let's see, there are some questions in the chat as well. Why does Google still give preference to expired domains? My competitors buy domains which are about to expire or have expired and which have good links. These sites then rank quickly, even if they have irrelevant backlinks.

We don't give preference to expired domains. So I think that part is something that is pretty clear from our side. It's also something where I know the web spam team and the quality team do watch out for this kind of behavior, and they do take manual action where appropriate. So I wouldn't say that just because some competitors are doing this, they're actually profiting from it. That said, sometimes people get away with doing really sneaky things, and we don't catch it until it's maybe too late. So if you're seeing things like this and they're showing up in the normal search results, by all means, please submit that with the web spam report form so that we can take a look at what is happening here. Are we missing something specific? Is this something where we need to be taking more manual action? Is this something where we need to maybe algorithmically recognize these kinds of changes a little bit faster? From that point of view, it's always useful to get this kind of feedback.

Are self-referencing canonical tags kind of a best practice, if you don't have related pages and your website is small? Self-referencing canonical tags are perfectly fine. There's nothing wrong with that. They help, especially if you have a static HTML site where you can't control which URL parameters people are using to access your pages. The self-referential canonical link element really helps us to understand what the primary URL is that you want to have crawled and indexed. So from that point of view, that's perfectly fine.
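A sketch of the mechanics behind that advice: however a visitor arrives at the page, the emitted canonical points at one preferred URL, with the query string and fragment dropped. The URL and parameter names are examples:

    from urllib.parse import urlsplit, urlunsplit

    def canonical_url(requested_url):
        parts = urlsplit(requested_url)
        # Keep scheme, host, and path; drop query string and fragment.
        return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

    href = canonical_url("https://example.com/page.html?utm_source=news#top")
    print(f'<link rel="canonical" href="{href}">')
    # -> <link rel="canonical" href="https://example.com/page.html">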
A question about structured data with schema: in our company, we have a site that is technically like a blog. Is it a good practice to mark up the pages with two or more types of structured data, like product and news article? Or might this be something that confuses Googlebot?

So that goes back to the general answer, where I mentioned that the structured data should be specific to the primary element of your pages. If the primary element of your page is a blog post, then that's what the structured data should be about. If the primary element of the page is a product, then it should be about that product. So from that point of view, I would try to focus on one specific type of structured data and just use that. If you have multiple sub-elements of that piece of structured data, that's perfectly fine. A common use case is a product that might have some structured data for reviews or aggregate reviews. So that's the direction I would take there. I wouldn't just randomly put multiple pieces of structured data on the same page.

All right, I think we had that one already, about the facets. After the March core update, what is Google looking for in an article? Page views, internal links, external links? I see a lot of organic or authority sites ranking well, even though I have better, unique content. So what should websites do that have no or less authority?

So in general, these are quality updates that we make across the board. We call them core updates because they are in the core of our ranking systems, where we try to figure out which of these pages are most relevant to the specific queries that people are making. On the one hand, technical aspects are always useful to take into account. So if you're seeing things like technical issues on your website, or the internal linking not being optimal, which we talked about before as well, then those technical things are definitely things I would clean up. On the other hand, if the overall view of your website is such that it's hard to tell that you're actually an expert on this specific topic, then that might be something that could also play a role. So that's definitely also something to look into. And obviously, for new websites, you won't be an expert from day one. That's, I think, kind of normal and kind of expected as well. And there's no magic trick to appearing to be an expert in Google's eyes, other than to really work to make sure that you have more expert content: essentially, that it's clear that you know what you're talking about, that you're an authority on this topic, and that you can show users, when they come to your website, that it's not some random blogger who researched this topic for five minutes and then wrote an article, but that you're actually someone who knows what they're talking about, and people should take your content seriously.

A lot of that comes back to some of the earlier posts we did on the Webmaster blog, specifically when we launched the Panda update, where we talked a bit about what we think makes a good, high-quality website. I'd look into that, and I'd also look into the Quality Raters guidelines that we've published, I think in the meantime in a couple of versions already, to see where we think our quality raters tend to look when they review results in the search results. And the thing to keep in mind is that our quality raters are not going off one-to-one, looking at websites and saying this is good, this is bad, and demoting or changing the rankings of those websites. They're helping us to fine-tune our algorithms. So generally what happens is our engineering teams put together a bunch of updates for our algorithms, and they test these updates with the quality raters: they give them a bunch of results and say, this is with one algorithm, this is with a slightly different version; which of these results are the better ones? And based on that, we try to fine-tune our algorithms. And they take into account all kinds of things. So from that point of view, I'd really take a look at the Quality Raters guidelines, see what we're looking out for, and think about ways that you can make it clear to your users how your website is relevant for the things that they come to your website for.

A question about hreflang and People Also Ask boxes. I noticed Google doesn't necessarily pay much attention to hreflang when choosing the site an answer is extracted from. Seems like a complicated question. So for example, if you search for Bondi Sands in Australia, you get the ten blue links showing Australian sites, due to hreflang. But if you click and expand a question in the People Also Ask box, it shows results from an FAQ page that's attached to the UK site. This is despite the hreflang tags being set up correctly. So they're wondering if Google doesn't respect the hreflang in these cases.

I don't know. That's a good question. So I'd have to take a look to see what exactly is happening there. In general, we should take hreflang into account for those kinds of cases. But there might be some edge cases that you're seeing, where maybe we're not picking that up properly, or we don't have the hreflang set for that particular page. Because hreflang is kind of tricky, in that it's always between individual pages. And we have to have those pages indexed as canonicals individually. So it needs to be clear to us that this Australian page has an hreflang version for the UK, and that they're exactly equivalent, so we can swap out the URLs as appropriate. And sometimes that's hard to get set up properly. So it's hard to say offhand if this is something on our side, where we're just not recognizing or not using the hreflang link here, or if it's more that we're not able to understand the hreflang connection between those sets of pages that you're looking at explicitly there. But I'll definitely try this query out and see if I can reproduce that. Otherwise, I'll try to ping you somewhere to get more information.
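Because hreflang only works between individual pages and has to be reciprocal, a mapping like the one below is easy to check mechanically. This is a small sketch over a hand-maintained table of placeholder URLs:

    # Each page maps language codes to alternate URLs, including itself.
    annotations = {
        "https://example.com/au/": {"en-au": "https://example.com/au/",
                                    "en-gb": "https://example.com/uk/"},
        "https://example.com/uk/": {"en-gb": "https://example.com/uk/"},
        # the UK page is missing its return link to the AU page
    }

    for page, alternates in annotations.items():
        for lang, target in alternates.items():
            if page not in annotations.get(target, {}).values():
                print(f"missing return link: {target} does not point back to {page}")

No output would mean every alternate points back; a missing return link, as in this sample data, is exactly the kind of setup problem that can keep the hreflang pair from being used.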
When will the next Hangout in German be? I don't know. I skipped one, I think, because of a conference in Germany, and I'm skipping this week because of the holidays. So hopefully in the next batch. Maybe I should do one in between, to make sure that we don't lose track of the German folks completely.

Is it better to leave redirect hops in place when URL names are changed, like this: page-one.html goes to pages.html, which goes to new-pages.html, which goes to newest-page.html? Or should it always just point to the final URL?

In general, it's best to go directly to the final URL, if that's at all possible. One of the reasons is that it's normal for sites to make changes over the years. And if you've collected multiple of these changes over time, then we have to follow through this chain of links with redirects to reach the actual URL, which means every time we need to request a new URL. And if we have too many steps in a redirect chain, then what tends to happen is we'll look at the rest of the chain in the next cycle, when we look at that page again, which would probably be like a day later. I think the limit is five redirect hops that we follow immediately, and then the next hop we would follow in the next crawl cycle, essentially. So ideally, if at all possible, I'd redirect to the final destination. If you can't do that, then you can't do that. Sometimes you collect cruft, and it kind of stays around. In general, if these are changes that happened over multiple years, then over time we will probably have noticed your new URLs anyway, and we probably won't be crawling those multiple-generations-back URLs anymore. And if we're not crawling them that frequently anymore, then generally speaking, it's not so critical to keep those redirects. People often ask when they should stop redirecting something, and from our point of view, that's something that's kind of up to you. I tend to suggest keeping redirects in place for at least a year, so that we definitely have a chance to see those redirects multiple times over the course of the year and follow them to the next hop. So if this is really something that's multiple generations back, like ten years ago this URL redirected here, and then a year later it redirected somewhere else, then probably those ancient redirects are not really that relevant or critical anymore to maintain.
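Before flattening a chain, it helps to see how many hops it actually has. A quick sketch with the requests library; the starting URL is a placeholder, and resp.history holds one entry per redirect hop:

    import requests

    resp = requests.get("https://example.com/page-one.html",
                        allow_redirects=True)
    for i, hop in enumerate(resp.history, start=1):
        print(f"hop {i}: {hop.status_code} {hop.url} -> "
              f"{hop.headers.get('Location')}")
    print("final:", resp.status_code, resp.url)
    if len(resp.history) > 1:
        print("consider redirecting hop 1 straight to the final URL")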
I'm building a CMS. If a visitor visits a page and it redirects from page to pages, that would usually be done with a 301 redirect. In the case when a user visits a URL like page with a question mark and UTM parameters, I redirect to pages first, and then add the dynamic part again.

So I think this is similar to the previous question, where you have a redirect chain. In general, again, I'd try to redirect as quickly as possible to the final destination, so that's the first part. Then there's the question of 301 or 302 for these individual steps. From our point of view, we try to differentiate between redirects that are permanent and redirects that are temporary. For redirects that are permanent, we'll try to index the final destination URL. So that's a sign telling us the canonical should be the destination. And temporary redirects are ones where we tend to keep the initial URL, the redirecting URL, as the canonical. So those are the two situations that we look at there. A complicating factor is that if a temporary redirect stays in place for a longer period of time, then it's not really a temporary redirect anymore; it's really more like a permanent redirect, and we'll treat it as such. So if you have multiple redirect steps, and some of them are using 301 and some of them are using 302, then you're giving us mixed signals, where we see some permanent, some temporary redirects. And essentially what you're telling us is that you don't know exactly which of these URLs you want to keep as the final destination, or as the indexed version. And that means we'll generally tend to fall back to the other factors that we look at for canonicalization. So for canonicalization, we'd also use the rel=canonical on these pages. We'd use internal and external linking. We'd use the sitemap file. We'd use hreflang links. We'd prefer HTTPS over HTTP URLs. All of these things add up. So I suspect what would happen here is, if you have multiple steps, some with 301, some with 302, we'd tend to look at the overall picture: which of these URLs, the initial one or the redirect target, is the one that we think would most likely be your preferred canonical, and we'll try to pick that one.

In practice, it's a lot more complicated than you really need to worry about, because from our point of view, it's more a matter of which URL we show in the search results. It's not that these URLs will be ranking any differently. It's not that PageRank will be flowing in any special way. For something that has a 302 redirect, we try to pick the redirecting URL; for something that has a 301 redirect, we try to pick the destination URL. And both of those URLs have the same signals. They have the same links going to those pages, and we would rank them exactly the same way. So there are a lot of complicated steps in between with regards to which of these we pick, but in the end, it's just: do we take this one, or do we take that one? And in the search results, it'll be shown in exactly the same way. So from my point of view, you might look at this and say, well, this is way too complicated; I'll just let Google pick a canonical for me. And most of the time, that would be perfectly fine. If you don't have any preference with regards to which URL is shown, then we'll pick one, and we'll rank it in the same way we would otherwise rank the one that you prefer to have shown. Obviously, with SEO, you try to do things exactly the way that you want. And if you do have a preference, then make it as clear as possible.
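One reading of John's advice for the CMS case, sketched with Flask for brevity (the routes and parameter names are invented): answer every variant of the old URL with a single permanent hop, drop the tracking parameters, carry over anything meaningful, and keep the status consistently 301.

    from urllib.parse import urlencode
    from flask import Flask, redirect, request

    app = Flask(__name__)
    TRACKING = {"utm_source", "utm_medium", "utm_campaign"}

    @app.route("/page")
    def old_page():
        kept = {k: v for k, v in request.args.items() if k not in TRACKING}
        target = "/pages" + ("?" + urlencode(kept) if kept else "")
        return redirect(target, code=301)  # one hop, consistently permanent

Using the same status code on every step avoids the mixed permanent/temporary signals described above.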
Are you aware of a sitemap bug in which Google doesn't crawl the sitemaps of a particular website correctly, or at all? New sitemaps get a couldn't-fetch error, and the existing sitemaps are not crawled.

I've seen this reported off and on, but in general, the cases that I've looked at tend to be very specific, and it's not that there's an overall issue with regards to sitemaps. So my feeling is that it would probably be ideal to just report this in the forum and let us know about the sitemap files in particular where you're seeing this problem. I noticed you also submitted a second question, which is very similar, where you say you have over 20 properties, but only two of them are seeing this problem. So I would, like I said, start a thread in the forum and explicitly mention those two properties and the sitemap files, so that folks there can take a look to see if there's something specific happening with those sitemap files, or if there's something that needs to be escalated to Google to look at those particular sitemap files. And if you already have a forum thread with all of these details, then drop a note in the comments there, and I can jump in to see what exactly is happening.

We had our old website redesigned to be modern and up to date with current standards. The website was launched two months ago. We saw a large drop in organic visits. The new website is redirected to a domain that we own. The content is mostly original. A few posts and news items that we publish are not original content; we link to the publishers. We don't spam or use any unethical techniques. What could be happening here?

I don't know. So I haven't taken a look at this particular website, and it's really hard to say offhand. In general, the thing to keep in mind is that if you're doing a redesign, we will take the final version of the content into account and use that for indexing and ranking. So a redesign, from our side, doesn't necessarily mean that everything will remain exactly the same in search. We will take the new content into account. We will take the new setup of the website into account: all of the internal linking, the way that you're providing the content, all of that is taken into account. A lot of people do redesigns for SEO reasons as well, so we need to be able to take the current version into account. Just because a website used to be good and did a redesign doesn't necessarily mean that the redesigned version will be treated just the same as the initial version was. Oftentimes, a redesign will look completely different. Often, it'll have completely different internal linking. The HTML on these pages might be completely different. It might be using different technologies, where maybe the old one was a static HTML version and the new one is using a JavaScript framework, maybe in a way that is not optimized for Google. So all of these things can come together here. You also mentioned that you've redirected to a new domain. So that's a site move on top of the redesign, which also adds a bit of complexity. And all of these things can have subtle effects, where if you don't do the individual things correctly, then you could be seeing the effects of that. So for example, if you don't do a site move in a way where you're redirecting URLs on a one-by-one basis, but rather redirect all of the old site to the home page of the new site, or you might not even have equivalent URLs for each of the old URLs, then that can have a really severe effect on the website in search. And similarly, if you redesign in a way where we can't crawl or index the content anymore, then that will also have a pretty severe effect in search. But all of these things are usually technical things that can be picked up on and improved. If you're finding that you're not able to figure out what exactly happened there, then I would strongly recommend going to the Webmaster Help Forum and posting what you posted here, with the details, so that folks there can take a look to see if there's anything technical, or anything kind of basic, that you could be doing differently. I wouldn't assume that there's anything on our side where we'd say, just because there was a redesign done, we will treat this website worse. We do try to take redesigns into account. Sometimes they take a while to get reprocessed, but a redesign can result in a website doing just as well, or even much better, than the initial version.
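For the site-move part, a page-by-page map along these lines (the paths are invented) is much safer than blanket-redirecting everything to the new homepage, and gaps get logged instead of silently collapsing onto the root:

    REDIRECTS = {
        "/old-about.html":   "https://new-domain.example/about/",
        "/old-contact.html": "https://new-domain.example/contact/",
        "/blog/old-post":    "https://new-domain.example/blog/old-post/",
    }

    def lookup(old_path):
        target = REDIRECTS.get(old_path)
        if target is None:
            # Better to log and fix the gap than to dump everything on "/".
            print(f"no one-to-one mapping for {old_path!r}; add one")
        return target

    print(lookup("/old-about.html"))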
A rich snippet is only showing for the posts that are on the first page, and not on the other 45 pages of a thread. Can you give a suggestion for making sure that the rich snippet is shown for the correct number of posts? All 45 pages are shown as indexed.

It's really hard to say which direction you're asking in here. Usually, like I mentioned, the structured data should be specific to the particular page that it's on. So if you're including the same structured data on all 45 pages of a thread, then that seems like something where it would be correct from our side to just show it for one. But I'm not exactly sure how you mean this. Maybe these are different posts, blog posts, or different pieces of content that you have. I'd perhaps also go to the Webmaster Help Forum and double-check with the folks there, specifically giving your URLs and showing which queries you're looking at, to show the effect that you're seeing there.

Hi, John. May I just interrupt you in between? I tried to join the call earlier, but I had difficulties joining, so that's why I'm interrupting. The question I have is with regards to South Africa, where I have a website which has, over the course of the last six months, received a lot of backlinks from websites which look like they are of a very spammy nature, and also where the content of my site has been replicated on other domains and is being used as doorway pages into other thin affiliate websites. And I am concerned about what's been going on. So the question to you is where to turn with regards to this problem, because I see it as a problem also due to the fact that in the SERPs in South Africa, the quality of the search results has changed, and there's a lot of spam now taking over the SERPs. So it is a concern of mine, and I'm wondering how to tackle this.

Usually, just seeing a bunch of spammy links to a site shouldn't cause any problems. We have a lot of algorithms that try to recognize the kind of normal spammy linking behavior that happens on the web, and we essentially just ignore that. So that's something where, if you're just seeing these spammy links in Search Console or in any other link-checking tools, I wouldn't necessarily assume that they're a cause for a problem. If you're seeing that these are picked up by the web spam team and there's a manual action taken against your website because of spammy links to your site, then that's, I think, a different situation, but that's also one where you would explicitly get a notification about that problem. But in general, just because there are spammy links to a site, I wouldn't find that to be too problematic. These kinds of spammy links, and these kinds of spammy sites that scrape other people's sites and link to random other sites, have been around almost forever, and we have a lot of practice dealing with them. So you might find them in your reports, but I wouldn't necessarily assume that they're causing problems.

What I would do, though, is if you're explicitly seeing that the search results overall are problematic, especially with more generic queries (so not where you're looking for some long string of text, but where you're looking for some generic product, for example), then that seems like the kind of thing that would be really useful for our teams. So on the one hand, you can use the feedback feature in the search results. If you're seeing this as something that is broader in nature, and you're seeing it for a lot of queries, then I'd love to take a look at that and pass it on to the teams here directly. So if you want, that's something you could send me directly. Otherwise, if these are really just individual queries where you're seeing this, then that seems like something the feedback tool in the search results would be useful for. Thank you. I will send it to you after the call. Cool, fantastic.

All right, maybe I'll just take more questions from you all, if there is anything left on your side that we should... I have a question, actually. Okay.

Some time ago, I installed a plugin, OneSignal, and they used to send a push notification whenever I publish an article. But the problem with that plugin is that the entire directory has been indexed in Google, like wp-content/plugins and all these things, like PHP files, JavaScript. After that, I de-indexed it by adding one line in .htaccess, but right now it is still indexed in Google. So is it going to affect my rankings or not?

That shouldn't be a problem, because those URLs shouldn't be ranking for anything specific to your website. If you want to remove them faster, what you can do is use the URL removal tool in Search Console. So if you have the website verified, you can say everything in this folder should be removed from the search results. And what will happen is we'll stop showing those URLs in the search results, and as we reprocess them, they'll also be dropped from our index directly. So that's, I think, the direction I would take there. In general, though, just because something is indexed doesn't mean that it has any effect on your website.
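John's suggestion here is the removal tool. As a complementary, longer-term option he didn't prescribe, files under that path could also be served with a noindex header so they drop out and stay out as they're recrawled. A sketch as standard WSGI middleware, with the path prefix taken from the question:

    def noindex_plugins(app):
        def middleware(environ, start_response):
            def patched_start(status, headers, exc_info=None):
                # Append the header only for responses under the plugin path.
                if environ.get("PATH_INFO", "").startswith("/wp-content/plugins/"):
                    headers = list(headers) + [("X-Robots-Tag", "noindex")]
                return start_response(status, headers, exc_info)
            return app(environ, patched_start)
        return middleware

Note that a header like this only works if the URLs stay crawlable; blocking the directory in robots.txt at the same time would keep the header from ever being seen.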
Okay, one more question. I have heard a lot that 301 redirects are a bad thing, and that Google considers them a bad thing. But I want to know: I actually have two sites, and they are basically on the same related topics, and you know, that niche is too big. So can I redirect both of the sites and merge them? Sure. And what is the best way to do that?

So this is something where a 301 redirect would be the correct approach. One more question: does a 301 pass all the juice, or is it not a good thing at all? It passes the signals. So what happens there is, we understand that these URLs are the same (you would need to do that on a per-page basis), and we will try to pick the destination URL as the one to show. So essentially, it's passing all of the signals. Okay, thanks.

I have a related question. Okay. We made a domain transfer about two months ago. Page-per-page 301s are implemented; well, it's a blanket transfer, you know, from .htaccess. And we lost half of our impressions and half of our organic traffic. This had never happened to me before. It looks like I did nothing wrong, and we can't recover from it, even two months later.

Oh, I see. You also added the URL in the chat. I don't know; I'd probably have to take a look to see what exactly is happening there. But in general, if you've set up the redirects properly, then that's something that should be working pretty quickly. The one thing I would watch out for is that you're not removing the old site or blocking it with robots.txt or anything, because sometimes we'll have a mix of URLs indexed, and if you're completely removing the old site from search, then those URLs would not be seeing any traffic. The other thing that I've also sometimes seen is when a site moves to a domain name that was problematic in the past. Then, essentially, you're moving into, I don't know, a bad house that still needs more time to be rebuilt into a stronger structure. So if you see things in Search Console, in the manual actions section, where you notice that, oh, there was lots of spammy stuff happening here, or if you go to the archive.org website, where you can look at the older versions of that domain name, and you see that this website used to be used for really spammy things, then that's something where you need to at least understand that you've moved your good website onto a domain that still has a bit of a spammy history associated with it. And usually what happens there is that, over time, as we recognize that your new website is really something completely new that is housed on this domain name, we'd be able to reflect that in the search results, and things will settle down normally again. If you see things like really problematic links going to that domain name from earlier owners, then that would be something where I might go ahead and use the disavow links tool to clean out the biggest problematic links that are pointing at the site. So I don't know if any of this is appropriate for your particular case, or if it's a completely new domain name, but that's the direction I would take in a situation like this. We know it's a new domain name, so. OK.

The other thing is also that we have made various quality updates over time as well. So it might be that your move essentially just happened at a time when there were also quality changes taking place. So depending on the type of website that you have there, that might be something where you'd want to review some of the general quality guidelines that we have, those kinds of things. Looking at your website now, it looks like it's more of a small business website, I guess. So that feels like something where that probably shouldn't be taking place. But I'll also take a look at it afterwards to see if I can pull out anything specific there. Thank you.

A question with regards to that: on the one hand, you just said that a website might be moving into a bad house, and that you should use the disavow tool if you have bad links pointing to it. In my previous question, you said additionally that Google is really good at detecting bad links and ignoring them, like nullifying their value. So which is it? How should you think about this? Because moving into a bad house versus nullifying the value seem like two separate things.

Yeah. Wow, I think that's a topic where I'd probably need to spend a lot more time, seeing how we're almost at the end of the session. So I'd prefer to save that for the next one. Do you have time on Thursday, maybe, to join in as well? Yes, sure. Same time, or? I think it's in the morning. OK, that should be fine. OK, cool. And maybe drop that question into the submitted questions for the next Hangout as well, so that I don't forget. Because, I don't know, people ask this all the time, and if I run through it and just give a five-second answer, then I don't think that's very useful for either of us. I understand. Cool.

All right. So with that, I need to run. Thank you all for joining. I hope to see you all again, maybe on Thursday, maybe in the next round. And I wish you a great week in the meantime. Bye, everyone. Bye. Bye-bye.