All right, welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a webmaster trends analyst at Google here in Switzerland. And part of what we do are these webmaster hangouts for webmasters, SEOs, publishers of all kinds, with regards to questions around Google and Google Search. A bunch of questions were submitted already. We can go through some of those. But I see some new faces here, so if any of you want to introduce yourself or get started with a question, feel free to jump on in.

Hey, John, I'd be happy to go ahead. Sure, go for it. Can you answer questions this morning about Google My Business and Google Maps? Not really. From our side, those are completely separate areas, so I don't really have much insight into that. Sorry. What about that? I thought so, but I thought I'd give it a shot. OK. They have a pretty good help forum for them, so I'd go there and check with them there. All right, thank you. They don't do any office hangouts for Google My Business, do they? I think they've done a few, but probably not regularly. Everyone wants Google My Business. OK. Oh, man. Google SEO's big, too. Yeah, that's good. Oh, yeah.

Any other questions to get started? Yeah, hey, John. My name is Pete. I was on a hangout with you about a month ago. I brought up an issue with some cross-domain canonical and caching issues going on with the website platform that I work with. And it seems to be getting a little worse. I'm starting to see some mixed data show up in breadcrumb results in mobile, actually, now. Like live searches, not just site: searches. This is basically where we have different websites being the canonical version of a page for different local businesses. So I just wanted to follow up with you and see if you had mentioned that maybe the team could take a look at it and see if there was something that was happening on your end. So I wanted to just follow up with that and then see if there's anything we could do.

Yeah. This was, I think, the car dealerships? Yeah, correct. Yeah. I'm not aware of anything specific that happened there. But did you end up sending me an email with some of the details? I didn't send you an email, but I'd be happy to do that, for sure. OK, that might be a good idea, because I can take a look at that with the team here. Let me just drop my email address into the chat. So I'm happy to take a look at that with the team here. I don't know if they already took a look at that based on a previous escalation. I was out on vacation, so everything from before is a blur. For sure. OK, great. I'll definitely send an email with details, and hopefully we can get something resolved here. Cool.

I think the general problem that we sometimes run into with regards to these kinds of setups, where I believe you have a common CMS or a common backend, and you create local sites for people individually, one of the things we've run across there with some of those setups is that some of the URLs work across different domains. So for instance, I don't know if this is the case in your situation, but you might have an ID associated with a certain page, and you could use that ID on any of the domains, and you would be able to pull up that same page or the same content. And what would happen on our side? That shouldn't be happening from our side. No. OK.
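Purely as an illustration of the kind of setup John describes here, a minimal sketch of one way a shared backend could keep an inventory ID from resolving on every dealer domain; the Express-style server, the dealerForInventoryId lookup, and the example domains are all hypothetical, not anything from the platform being discussed.

```typescript
import express from "express";

const app = express();

// Illustrative lookup: which dealer domain "owns" a given inventory ID.
function dealerForInventoryId(id: string): string | undefined {
  const owners: Record<string, string> = {
    "12345": "dealer-a.example",
    "67890": "dealer-b.example",
  };
  return owners[id];
}

app.get("/inventory/:id", (req, res) => {
  const owner = dealerForInventoryId(req.params.id);
  if (!owner) {
    res.status(404).send("Not found");
  } else if (req.hostname !== owner) {
    // The ID exists but belongs to a different dealer site: send users and
    // crawlers to the one domain that should be canonical for this content.
    res.redirect(301, `https://${owner}/inventory/${req.params.id}`);
  } else {
    // Serve the page on its own domain, with a self-referencing canonical.
    res.send(`<link rel="canonical" href="https://${owner}/inventory/${req.params.id}">`);
  }
});
```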
The URL patterns are definitely the same, like the inventory pages have the same path, but you can't generate a different dealer's content on another dealer's site with any type of ID or anything like that. OK. OK. So I'll take a look and see if I can spot anything there, and if we can clean that up with the team here. Cool. Great. Thank you. Sure.

Anything else before we get started? Yeah, go for it. I had a delay. I submitted a site to that too. Oh, man, I can hardly hear you. It's like breaking up. No point. Oh, man. Maybe you can type the question in the chat, or I can look out to see if you submitted it on the Google Plus page, because I can barely hear anything there. All right. Let me get started on the other questions. And if you can type the question in the chat or try later, that would be great. Cool. All right.

In Search Console, we have a number of sitemap files for different categories which don't actually have any URLs in them. A little bit of background noise. So we have a number of sitemap files without any URLs in them. Is this a problem? From our point of view, that's not a problem, but it's also not great, because you're submitting things that basically don't have anything useful in them. So I wouldn't worry too much about it, but I think the next time you're going through the code that's generating your sitemap files, it might be useful to just drop the sitemap files that you don't actually need.

We sponsor a local hockey club, and they put up an image link on their site thanking us, but it's dofollow. And we don't know what to do to change it. Is this going to cause us any problems? So in general, when the web spam team takes a look at these kinds of questions, they try to look at the bigger picture. And if, with your website, you're sponsoring lots of different clubs and sites where it looks like the primary intent is to get a link there, then that's something the web spam team might take action on. On the other hand, if this is one of lots of links to your site, and this is one from a local hockey club, and it's like you just know them, and they can't easily fix that or change that on their site, or they don't know how, then the web spam team is not going to worry about that. So I'd try to take a look at the bigger picture there, and think about whether or not this is really something that you're doing systematically, like going out and sponsoring other sites or products with the intent of getting a link, or if this is something that's essentially just a natural part of the web. So my guess, based on the question here, is that this is not something you need to worry about.

We were initially hit with a 30% drop in rankings just after the 1st of August, which I understand was the "Medic" update, though we're not medical related, just an e-commerce site. We're starting to see a shift in rankings improving now. Could this be related to the update, and things go up and down? Or could it be something else? So the update we launched, I think around the 1st of August, was more of a general ranking update like we always do. So it's not specific to medical sites. It's something that could affect any website out there. And in general, it's not something where we'd have any specific guidance to say, well, this is what you need to change. This is really just part of the normal changes that we make on the web overall, as they always happen. It's not a sign that your site is bad, or that the site that went up is better or good. It's just that these things change over time.
So with regards to changes afterwards, that sounds like just normal algorithmic changes as they can always happen.

If you were an affiliate business and displayed the address of the business you were partnered with, along with their five branches, in order to help your customers and also to help them try to rank locally, how would Google see this? As these are technically not your addresses, but you want to offer the customer the best user experience and service possible on your website while still trying to be able to rank and be competitive. How would Google treat you, and how should one deal with addresses if you're an affiliate? So I don't quite understand the connection that you have there with these other businesses. But in general, putting an address of a business that you're pointing out, or you're talking about, or you're writing about, is perfectly fine. If there is any kind of a business relationship with them, that doesn't really matter. That's not something where you'd need to do something special with the address that you put on your pages. So from that point of view, I suspect you're overthinking it. And essentially, think about what makes sense for your users, the kind of content you want to provide for your users. And if that includes other business addresses that you think might be useful or might be relevant to the content you provide, then go for it. That's not something where I'd want to hold you back.

I've seen a technique whereby some sites are sharing the same blocks of content over multiple pages, but swapping the blocks around very frequently so that Google is re-crawling a different combination every single time. However, as far as I can remember, you have ranked all of these pages in the top three positions, so competitors cannot get a look in. What are your thoughts around this technique? This sounds, to me, like unnecessary busy work, in the sense that our algorithms are pretty good at figuring out where we should rank things in a relevant way. And just by shuffling things around and artificially making it look like these pages have changed, I don't think you're doing anything useful for a website like that. So if you're seeing your competitors do this, it's like, let them keep doing this unnecessary busy work while you work on actually improving your website overall, rather than just artificially shuffling pieces of text around. So that's kind of my take on this.

Hey, John, maybe I can ask another question that you can answer. Sure. There has been a video that, oh, well, there have been a lot of tests done around what Google can crawl with JavaScript. And I know that you guys have done a lot with what you can crawl with JavaScript. But I think there was a hangout a while back where someone had inserted a canonical tag using JavaScript. And I think you had some concern over whether you guys would see that or not. So I guess my question is, implementing hreflang and then modifying a canonical tag using JavaScript, do you see that being an issue or something that Google would recognize? That could work. So I think the tricky part with using JavaScript to change something like a canonical or robots meta tag is that it can significantly change the meaning of the signal that you're providing to us. And with JavaScript, the problem is, in the first step, we crawl and index the static HTML version. And then in the second step, we render it and process the JavaScript version.
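As an aside, a minimal sketch of the approach being discussed here, assuming the static HTML ships without any conflicting canonical or hreflang tags, so that client-side JavaScript adds the only copy of those signals; the URLs are purely illustrative.

```typescript
// Inject rel=canonical and hreflang annotations client-side. This only makes
// sense if the static HTML does not already contain conflicting tags, so the
// rendered version carries the single, unambiguous signal.
function addLinkTag(rel: string, href: string, hreflang?: string): void {
  const link = document.createElement("link");
  link.rel = rel;
  link.href = href;
  if (hreflang) link.hreflang = hreflang;
  document.head.appendChild(link);
}

// Hypothetical URLs for illustration.
addLinkTag("canonical", "https://example.com/en/widgets");
addLinkTag("alternate", "https://example.com/en/widgets", "en");
addLinkTag("alternate", "https://example.com/de/widgets", "de");
addLinkTag("alternate", "https://example.com/en/widgets", "x-default");
```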
So if you're providing one signal in the static version and you're providing a completely different signal in the rendered version, then that can kind of provide some fluctuation as we kind of switch between the static version and the rendered version and as updates come. So that's something where what I might do in a case like that is just not provide a canonical in the static HTML version and then add one with JavaScript, so that the signal is clear, like this is the only signal that we have with regards to the rel canonical, rather than having it set up like this and then it switches over to something like this later on with JavaScript. OK, so that's kind of the answer then: start with the static HTML, no canonical, no hreflang, and then use JavaScript to populate both, rather than starting with something and then swapping it. Exactly. Excellent, thank you.

Let's see, in the chat, there is a question. I heard about a Google update on August 22nd. Can you confirm that information? Can you suggest something about this update? I don't know about any specific update from August 22nd. So that's kind of tricky. I don't know what really to say there. In general, we make updates all the time. So sometimes there are small updates that are launched with regards to search, and maybe they just affect a very small number of sites, but maybe one of those sites is your site. So maybe that's something that you're seeing there. But these are essentially just normal changes, as we see them regularly.

Let's see. We went to HTTPS on August 9th, after the August 1 update rolled out. Since HTTPS rolled out, organic traffic and impressions have continued to trend down, since about August 10th or so. The site was also hit big by the August 1 update. Could going HTTPS so soon after an update cause more fluctuations or more loss of rankings? No. Essentially, these are completely independent of the algorithm changes that we roll out there. You can make changes on your website at any time. So that's completely independent of moves like HTTPS. One thing to also keep in mind with HTTPS in general is that the ranking effect is very subtle and very small. So it's not something where you could say, well, my site dropped in ranking by 30%, 40%, and I'll add HTTPS and I'll get back to the same position. It's more of a very subtle change that happens with regards to HTTPS, where primarily the effect that you would see is if we have two different pages that are essentially equivalent, then we'll tend to pick the HTTPS version. So that's something where I wouldn't expect the move to HTTPS to compensate for bigger algorithmic changes that you've been seeing on your website. Also, I wouldn't expect the move to HTTPS to cause more problems just because there was a recent algorithmic change. So this is something you could do completely independently.

The one thing around HTTPS migrations that we've seen where sites sometimes struggle is when migrations are done inconsistently or in a way that isn't quite aligned with what we documented, where you really have a move of the complete website one-to-one from HTTP to HTTPS with clear 301 redirects. Then our algorithms can sometimes see this move and recognize, well, it's not exactly a clear site migration, so we have to be kind of more critical and look into the actual URLs that are being moved around so that we can make sure we're doing the right thing. And if that's the case, then generally these migrations take a little bit longer, and you will see some fluctuations.
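For illustration, a minimal sketch of the kind of clean, one-to-one move described above, where every HTTP URL 301-redirects to the identical path and query on HTTPS rather than to the home page. It assumes a Node/Express-style server behind a proxy that sets x-forwarded-proto, which may not match any particular setup.

```typescript
import express from "express";

const app = express();
app.set("trust proxy", true); // so req.protocol reflects x-forwarded-proto

// Redirect every HTTP request to the identical path and query string on HTTPS.
app.use((req, res, next) => {
  if (req.protocol !== "https") {
    res.redirect(301, `https://${req.hostname}${req.originalUrl}`);
  } else {
    next();
  }
});
```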
So for example, if you remove a lot of URLs while migrating to HTTPS, or if you block a lot of URLs by robots.txt with the HTTPS migration, then all of these things make it look like the new site on HTTPS isn't really exactly the same as the old site, so we have to be more cautious when we try to analyze which signals get forwarded.

I've seen conflicting information online about image hotlinking. Does Google view websites that hotlink to our images similarly to a website linking to a page's URL? No. We don't treat an embedded image as a link to a website. It's essentially just an embedded image. And the question goes on. Does serving hotlinked images from a third-party CDN negate that effect or change anything there? Again, we don't see these as links to a website, so where you host those images is essentially totally up to you. Using a third-party CDN to host your images is generally a good idea, because they're often optimized for serving static content in a very fast way, so that's something that generally makes a lot of sense.

Let's see. A question from a site. I think we looked at this in the last hangout as well, with regards to a site that has seen some drops in traffic for the last couple of years. In general, if you're seeing changes over a longer period of time, then that's something where it's not a matter so much of something technical or one algorithm that's looking at your site differently. It's essentially a change that, from our point of view, has just been ongoing for quite some time, and it's more of a shift, I'd say, in the ecosystem in general, where maybe users are looking for slightly different content. The question goes on with regards to kind of the wrong content ranking, in that you write about something completely new and suddenly other sites that are referring to your content are ranking instead of your content. For things like that, I'd love to see specific examples. So if you run across something like that with your website, and especially if you see this happening from time to time again, I'd love to have examples of queries and URLs where you're actually seeing this happening. We generally try to make sure that we show the right results and the most relevant results in the search results. So if we're not showing your content when your content is kind of the authority on that topic, then that's something that we need to fix.

Why do the Google organic index and Google News extract dates differently from our news articles? I don't know. I'd have to take a look at the details there with regards to the dates. One thing that we recently updated in our guidelines, or in the structured data information that we have, is that we recommend using dates in the structured data directly. By doing that in the structured data, we can have a machine-readable date that we can pick up. And it's a lot easier for us to pick out and pull out the correct date for an individual article. So that's something where I'd double-check. I believe it's the article markup that we have in the developer center where we've added dates as a recommended property. So if you're not providing your dates with structured data and you want to make sure that Google can pick up those dates directly, then I'd recommend checking that out. But dates in general can sometimes be tricky. So it's something where, if you do see regularly that we're picking up the wrong date from an article, then that's definitely also worth sending our way.
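As a hedged sketch of the structured data recommendation above, this builds an Article-style JSON-LD block with machine-readable datePublished and dateModified values for embedding in an article page's HTML; the field values are placeholders, and whether NewsArticle or plain Article fits best depends on the site.

```typescript
// Build a JSON-LD block with machine-readable dates for an article page.
interface ArticleDates {
  headline: string;
  datePublished: string; // ISO 8601, e.g. "2018-08-24T09:00:00+02:00"
  dateModified: string;
}

function articleJsonLd(article: ArticleDates): string {
  const data = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    headline: article.headline,
    datePublished: article.datePublished,
    dateModified: article.dateModified,
  };
  return `<script type="application/ld+json">${JSON.stringify(data)}</script>`;
}

// Example: embed this string in the <head> of the article's HTML template.
const tag = articleJsonLd({
  headline: "Example headline",
  datePublished: "2018-08-24T09:00:00+02:00",
  dateModified: "2018-08-24T11:30:00+02:00",
});
```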
I am discovering a large number of page one rankings for a number of keywords and niches, namely SEOs and attorneys, that are using highly manipulative methods to get backlinks and link juice. What is Google doing to deter this nowadays? Yeah, let's see. It's a long question. I guess there are multiple aspects there that come into play. On the one hand, we do try to ignore links that are problematic to a website that we can recognize. This is done algorithmically. And it's also something that the web spam team may be doing manually as well, where they might be taking manual action on some of these links to essentially block them from passing any PageRank. And that's not something that someone external would be able to see. So if there's a manual action, then we send that to the site that's affected. We don't post that publicly. We don't highlight that in the search results. And what can happen in some of these cases is that the sites rank despite all of these issues that they're causing with regards to the links. I recently looked into a similar case that was sent my way, where someone was saying, well, this SEO company is claiming that through their links they were able to change this other site's rankings. And looking into the situation, it seems like the site was always ranking like that, for a really long time. And essentially, we're ignoring all of the work that this SEO company was doing by getting hundreds of thousands of links to those pages. So just because your SEO tools or your link tools show that there are lots of links to a specific site doesn't mean that Google is being kind of swayed by all of those links. We might be ignoring them completely. They might even be having a negative effect on the site. And the site might just be ranking despite all of these issues. So that's always something to kind of keep in mind. That said, if you do run across situations where you're seeing sites with lots of problems attached to them with regards to our web spam guidelines, then by all means submit a web spam report to us. If you're seeing bigger systematic issues with regards to some sites or a bigger group of sites, then you can also send those directly to me or to someone else on our side. And we can pass that on to the web spam team to kind of take a look at the bigger issue rather than just the issues associated with one specific site or URL.

What are reasons for completely different positions for keywords in singular and plural? We have some examples where the plural ranks in the top five and the singular isn't in the top 100. The search intent should be very similar. So I guess depending on what is happening there, we might be seeing these as something completely different. So just because one is singular and one is plural doesn't mean that we would show the same search results for those kinds of queries. It's very possible that we see these as completely different elements and maybe even as completely different intent from the user side. So that's something where I wouldn't necessarily assume that Google will always treat singular and plural words as complete synonyms and say, well, we'll show exactly the same search results for those two versions. And as a site owner, it might also be worth thinking about what users might be searching for differently. And is my site really the most relevant one for people who are searching for the singular version of this word versus those searching for the plural version?

I'm working on a project with multiple international subfolders and hreflang tags set up.
A significant portion of the content in these international subfolders is only partially translated. So menus and footers are translated, but the body copy is in English. There seem to be some mixed signals on the subject of hreflang tags set up for partially translated content. Wondering if you have any recommendation on how to handle translated content with hreflang tags. So in general, with the hreflang tag, we try to show the best matching version for users in the language and region that they're searching for. And if the best matching version on your website is one that is partially translated, then so be it. That's your best match. So that's something where you could use hreflang for that as well. Obviously, a full translation would be better. But if the best matching version is the one that you have where it's partially translated, then that's still a better match than a version that has nothing in that language. So from that point of view, that should be pretty much OK. I'll double-check the documentation there to see if there's anything specific we need to change there. But in general, that's kind of the approach. The other thing to keep in mind is that if you have partially translated content, that usually means we recognize that there are multiple languages on the page. And then it's a bit harder for us to understand what is actually the primary language for that page. So that might be something to keep in mind there as well, with regards to do you really want to provide this partially translated content on your site, or is it a better user experience if you just have the primary version that's a little bit more visible?

Let's see. As per new domain extensions, is a link with the anchor abc.xyz seen as the same as just, I guess, the text abc.xyz? We do try to recognize URLs in links when they come as anchors. And we treat them more as URLs rather than as keywords. So just because you have those keywords in your domain name doesn't necessarily mean that links to your site that just have the URL visible are suddenly kind of like keyword-rich anchor text to your site. So I wouldn't necessarily assume that we would automatically pick that up. Also, one thing I've noticed just kind of, I guess, on the side is that with a lot of these new top-level domains, some of the software that's out there that's trying to recognize URLs when they're added in text and convert them into links seems to have trouble. That's something that we run into from time to time as well, in the sense that maybe, I don't know, I think xyz is a top-level domain. So if you post something like abc.xyz into a generic forum, it might be that the forum software doesn't actually recognize that this is supposed to be a link and doesn't turn that into a link. So that's something that can also happen.

If we try to link to every page on our website from the home page, does this dilute the focus? So it does dilute it a little bit, in the sense that we can't understand the structure of your website that cleanly if it's linked like that. With smaller websites, I think it's completely natural that you would be linking across all of the different pages on your website. If you have, I don't know, 10, 20 pages on your website, then it's kind of normal that you can get to all of those pages from the home page. But if you have a larger website and you link to all pages from your home page, then we lose kind of the semantic structure of the website.
So we kind of lose the understanding of categories and higher-level pages, lower-level pages, and where these pages fit into the structure of the bigger website. So in general, I would recommend trying to stick to a clean structure with regards to your linking so that users and Googlebot can try to better understand the structure rather than just like, here's a collection of 100, 200, 300 different links that you can click on. So that would be my recommendation there. It's not so much that you would, I guess, see a big effect in search. But depending on the website, you could certainly see that we understand the lower-level pages a little bit better if we understand like, this product is in this category and this kind of higher-level category and those kind of things. We're facing a huge drop in traffic after an update. Please let us know what the issue is on our site. We've been working for the past three years without any drop in traffic. Again, I think this refers to the recent updates that we did in search, and those are just normal updates as we always do them in search. We try to figure out which sites are more relevant, which sites users expect to see for different kinds of queries, and we try to reflect that with our algorithm. So it's not necessarily a case that your site is worse or not as good as it used to be. It's just the web has moved on. If you're saying you've used this site for the last three years and nothing has changed, then maybe it's time to rethink what you're providing and does that still match the current expectations of users? What are some of the biggest SEO myths prevalent today? Wow, good question. I don't know offhand what I could mention there. I think in general, at least from a technical SEO point of view, a lot of things have gotten a lot better in the last couple of years in the sense that there's a lot more understanding of how search works, how crawling and indexing works, how rendering works, how everything around JavaScript works, and that kind of helps to prevent a lot of these things. Because if you understand the technical background behind some of the efforts that are needed to run a search engine, then it all becomes a little bit more logical with regards to what makes sense and what doesn't make sense. Whereas when there's not that much technical understanding about the background, then it's a lot easier to say, well, putting this meta tag here causes this change on my website. I don't know why, but I think it happened twice. So maybe this is something that always happens. Whereas if you have the technical understanding of what this meta tag does and how Google crawls and indexes content, then a lot of that just falls away. Because it's a lot more logical to understand what the connection is or if there could be a connection if it makes sense to even have a connection there. So with that in mind, I think a lot of the myths have kind of become less prevalent nowadays than they used to be in the past. How does Google measure direct traffic to a website? With regards to search, I don't think we measure that at all. So that's not something that we look at. I assume direct traffic is people just typing an address into the browser or clicking some random link on another website. That's not something that we take into account at all for search. So we don't really measure that there. I imagine if you're using Google Analytics, then that's something you'd see there more. 
If I have pages in a sitemap and these are not linked from the home page, is that a problem? If they're not linked from the home page, that's absolutely not a problem. If they're not linked from the website at all, then we might not necessarily know what to do with those pages. So that's one thing to keep in mind. I'd really recommend using a sitemap as a way to kind of provide additional information about your website, but not as a way to replace all of the internal linking on a website. So we should really be able to crawl the whole website normally and find all of your content. If we find additional information in the sitemap file, like change dates, then that's something that gives us additional value, but one shouldn't replace the other.

Kinesca, can you hear me? Yes. Back on that previous question, so combining the last two questions, so CrUX is not used? And I kind of tell people, because there's a lot of, I don't know what you'd call them, conspiracy theories, but I say CrUX is not used at all as a ranking factor. So I just assume that's kind of meant in the same way as the direct traffic question.

OK, wow. Lots of stuff in the chat here as well. What does Google consider to be duplicate content? Are there any penalties for duplicate content? So I guess one of the common questions that we get all the time. This maybe, I guess, goes into kind of the SEO territory, I guess. So in general, we look at content in several different ways. The most basic way to recognize duplicate content is when we see exactly the same content on different pages. So a really common example here is if you have exactly the same content on the www and non-www versions of your website, then that's duplicate content. That's not a problem for us, because we run into that all the time. And pretty much all websites have this kind of duplicate content. This is more of a technical issue for us. So we see exactly the same content on multiple URLs. That's trivial for us to recognize. And we just need to figure out which URL we use to store this content. So that's not a matter of a penalty or any kind of a problem. It's just, we've seen this content on multiple URLs. Which one do we pick? That's more of a technical issue. The downside there, of course, is when we see the same content on multiple URLs, that means we've crawled multiple URLs and not gotten anything unique, not received anything new or interesting from these URLs. So we've essentially wasted a bit of time to actually get to your content. We've crawled a bunch of stuff, maybe, from your website. And only a small portion of that is actually something new and something that's worth indexing separately. So the downside there is we might be wasting time to actually find your new content, time that we could be using to kind of pick up something new. If your website isn't providing anything new all the time, then probably you don't even care if Google is wasting time accessing all of these individual URLs, because we already have the important content from your website. So that's not necessarily bad.

The other kind of duplicate content that's really common is content within pages. So there are different variations of this. A really common one is that pages tend to have a common footer. So that's something where maybe across your whole website you have this bottom part here that contains just your addresses, your kind of links to your terms of service, all of this information.
From our point of view, this is a block of text that's essentially duplicated across your site. And for us, this is also no problem. We see this block of text on these different pages. And if someone is searching for content on your website, we know the content is in a different part of the page, and we can focus on that. Whereas if someone is searching for something within this block of text, we know this block of text is across a number of pages on your website; we just have to pick the right one. So again, it's a matter of picking the right URL and showing that in the search results when someone is searching for something that's duplicated across your website. So if someone is searching for your website's terms of service, then we're not going to show every page on your website that has a link to the terms of service. We'll try to figure out which of these pages is actually the most relevant one and show that. This is not just with regards to footers or kind of repeated elements across a page. It can also be with products, for example. If you have an e-commerce site and you sell a lot of similar products, then often the product description will also be very similar. And that's the same thing. If someone is searching for something within that kind of shared description, we'll try to figure out which one of these URLs is the right one and show that. If someone is searching for something outside of that shared block, then it'll be a lot easier for us to actually highlight that. So again, that's not a problem.

The problem starts when we see that sites use other sites' content in a regular way. So for instance, if a site always takes content from different news sites that comes in maybe by RSS feed, and they republish that on their site, and we don't see anything unique on the website itself, then from our point of view, we've crawled a bunch of stuff from this website. We've seen a lot of pages from this website, but there's nothing unique or of value on this website. And then it might happen that our algorithms say, well, actually, it doesn't make sense to spend too much time crawling and indexing this website, because we just see the same content as was published on other sites already. So that's kind of the bad situation to be in. And similarly, the web spam team, when they see this, they might also take manual action and say, well, our algorithms shouldn't even have to worry about this website. We should just be removing it completely.

John, can I follow up on that question? So we have a case here where a spammy domain name is copying 900 pages from our website recently. And usually, in these cases, I send a DMCA complaint and the hosting takes them down. However, this case is showing that maybe there is a loophole which is being abused. And the problem is that this domain name is hosted with rogue hostings, which are known to host malware and other kinds of malicious stuff. And what it does, basically, is this domain name hijacked more than 900 pages from our website. And the problem is that our traffic declined significantly. Using third-party tools, I found that this domain name is starting to rank for all of our terms. And the problem is that they're using cloaking. So when a human tries to visit this website, they will see a not found page. But when the bot visits this website, it sees our content. And here is the tricky part. The cache, basically, when you click on the cache, view cache, Google is displaying our original domain name.
So it's not displaying their domain name, but our domain name. And after researching the topic, I have found that this is called the 302 hijack tactic or something like that, which was used maybe 10 years ago. And supposedly, Google had fixed that. And right at the moment, I have used that tool to remove a lot of these URLs. But unfortunately, there are a lot left there. And this domain name hijacked the content from two other websites. I have checked their traffic as well. And with third-party tools, I have seen that their traffic is significantly lower than before, right around the time they started to hijack the content. And when you search the exact title, you are seeing this rogue domain ranking above ours, and even our URLs disappearing from the search. So I have used a method to remove these URLs. I have posted on the webmasters' forum. I have tweeted Gary as well on Twitter about this case. And what worries me is that despite some of these URLs being removed from the SERPs, they still hold some power. Because you said in some previous hangout that even if you remove a URL from the SERPs by using some tool or something, it still holds some power. Is it possible that really someone is abusing Google in order to steal content like that? Because basically, the cache which is shown there is showing our original domain name, not this cloaking domain name. And can I maybe give you the exact example by you providing me some email or something like that? Sure. Yeah. Let me just drop my email address here. You can send me something. Thanks, Serge.

I think there might be multiple things there. On the one hand, if they're cloaking in a way that they're showing a 404 to users and showing the content to Googlebot, then I don't quite understand what their motivation would be. So, malicious. A real competitor using us, because they do not have any other content at all. And it's specifically hosted on Bethost 6, which won't take the content off no matter what. Yeah. I mean, I'm happy to take a look at that. The other thing is, if you look at the cache page and it shows a different URL, that usually means we use that different URL as the canonical. So if the cache page for their content shows your URL, then we see your URL as the canonical. And your URL would be the one that should be shown in the ranking. But the problem is that even despite the fact that the canonical is pointing back to us, as modern WordPress plugins do, they always try to point the canonical back to the original source for cases like this. Despite that, for many of these URLs, Google was ranking that rogue domain name on top for our queries. And I have used third-party tools to check how the rankings of this domain are going. And they are all quite high.

And I've got some other questions regarding recent updates. I have been preaching white hat, being compliant with the search guidelines, for the past two years. But unfortunately, I've seen very bad results for some low-search-volume queries, for example, queries which include countries. And I'm seeing a lot of websites with auto-generated content ranking, with domain name extensions which are very weird, very new. And all the content is basically auto-generated. And here is the tricky question, because most recently, black hatters are abusing the Google Translation API, if you know about the case.
Is it possible that Googlebot is fooled by its own services, because it uses artificial intelligence to translate the content and the translations are getting better and better? So when somebody creates hundreds of pages of auto-translated content, Googlebot is fooled by this content and thinks that it's human-readable and that it's normal content? That could always be the case. I think that's something that, from our point of view, we can't exclude that possibility completely. I think the improvements that are happening with regards to automatically translated content, I think that's something that has both pros and cons, in that it might be used by sites that are essentially spinning content, like in the way that you're describing. It could also be used by sites that are legitimately providing translations on a website. And they just start with the auto-translated version, and then they improve those translations over time. So that's something where I wouldn't necessarily say that using translated content like that would be completely problematic, but it's more a matter of the intent and of the bigger picture of what they're doing. If they're essentially just spinning content and hoping that it ranks, then that would be more of a problem for us. And as always, feel free to send me examples. I'm happy to take a look at these with the team.

This is especially true for websites which are localized. For example, I'm here from Bulgaria, and when searching a query, I find a lot of auto-translated content. And you may say, well, that's because there's not much content in this language, but it's not the case. And that is especially true when I search for English queries, for example. I see some sites ranking on third or fourth positions which are auto-translated to Bulgarian. And I believe that's true for many other countries as well.

And I have another question. It's about the thing that you said in the last hangout, that the new update is about rewarding sites, not punishing websites. However, in some queries which I search, I find sites that are auto-generated, again with keyword stuffing, with a lot of bad backlink profiles. And they are doing a really horrible job. And maybe the only possibility for them to be ranking on top is because all of the other websites were penalized for some reason. I do not see that as a rewarding algorithm, especially for keywords with lower search volumes. And the other tricky part is, is it possible that Google is still trying to recalculate the algorithm, with the bigger sites being re-evaluated more slowly than the smaller sites? Because I'm seeing a lot of thin sites, 10 pages, 20 pages, ranking on top, which have heavy, spammy backlink profiles. And they're just doing black hat. And all these past two years in which I'm preaching, be compliant with the Google guidelines, create good content, et cetera, all of this now comes back at my reputation. And people are asking me, but do you see what's ranking at the moment? Is it possible that Google just needs a little bit more time in order to recalculate these positions so these thin sites will no longer be there on top?

I think that might be an option. I think it's also something where sometimes we just need more examples that we can pass on to the team. And sometimes what I've seen is also that in some regions or languages where maybe we don't have a lot of direct feedback, then sometimes things like this happen. And it's not, let's say, by design that we show worse content for queries like that.
That's more something that we need to fix. So getting feedback on that is really useful. It's about English queries, not localized. I'm talking about English queries. And the other thing, people are still abusing the technique of 301 redirecting, basically, old domain names. And I'm seeing a lot of content being ranked by people who are buying domain names from the 90s. They are using the tons of backlinks pointing at them. 10-page websites are outranking a lot of people that are doing everything to be compliant with the search guidelines. And for me personally, and I've seen many examples of this, the latest update did a worse job in the health niche than what was previously seen. And I'm not saying that because we're seeing a decline in traffic. I'm saying it because really, the results for the user, as a user, are not so good. I'm not sure if you sometimes take a bit of a step back in order to re-evaluate what has been happening, because at the moment things are not looking very good, especially for keywords that are a little smaller in volume.

OK. Again, I'm really happy to have examples. It sounds like you've collected a lot of things already there, so I'm really happy to have examples on these things, with regards to expired domain names and things like that. That's something that the web spam team is pretty much on top of. I know that they do take regular action on a lot of these cases where people buy expired domain names and they put up new content, or they put up the old content and just, like, hide some links in there, or they redirect. These are all techniques that have been around for a while. And I know from talking with folks on the web spam side, they are actively working on this problem as well. So that's something where I wouldn't say that this is like a simple loophole that you can just exploit. But there might be cases where people are getting through with spammy tricks. And that might be something that you're seeing a little bit more of, because you're maybe active in an area where that's more visible. I don't know. But if you can send me examples, I'd... It's more visible after the updates. More visible after the update? For example, there are people who were penalized. They told me, after the update, we're seeing we are ranking on top. They just forgot about some domain name which they're using for blackhat techniques. And after the update, they told me, well, we are back to ranking on top. So there are a lot of cases of that. OK, I'll give you an example. I'd love to take a look at that. Because I am regularly in touch with the web spam team. And some of the quality teams are based here in Zurich. So we're in touch with them as well. And the more examples we have, the more likely they'll be able to think about what it would take to make some algorithm changes here.

John. Hey, John. Hello. Go ahead. Yeah. Hey, John. I have a very quick question about multiple CTA buttons on one page. So on one page, I have implemented multiple CTA buttons which all lead to a single page, and which are there at every scroll. So will there be any positive or negative effect from Google? And mainly for mobile, I wanted to know. And you're implementing multiple what? Multiple CTA buttons. What kind of button? CTA buttons. Oh, call-to-action buttons. That's more of a usability question, I think. Totally up to you. I would test that. If that works for you, then that sounds good.

John, back on that previous question from the gentleman before. The 301 clone replacement kind of technique.
I had that happen to a site five years ago, on a very large site where they cloned the entire site. Then they built 1,000 spammy links to their site. And then suddenly, their site simply replaced ours. Can you explain what that was? And if you guys have done anything to address that particular problem? Well, is he asking me or you? Oh, I was asking John. I don't know what might have happened in that specific case. So that's definitely something that we work on from an algorithmic point of view. And it essentially comes back to a question of canonicalization. If we have multiple URLs and they have the same content, which one of these URLs is the one that we would show in the search results? And that's something where the team is actively working on that. And they've been working on that essentially all the time. And that's something where we take into account multiple factors to try to prevent any kind of abuse happening in cases like that. And when we do get alerted of issues like that, where some other website takes over essentially the search rankings for existing websites, then that's something that the team takes very seriously. And they work very hard to figure out, what are the signals here that tell us that this is not a real move, or this is not the same owner, but essentially that tell us this is something that we should be ignoring? Or maybe even going further and saying, well, this is something that we should be picking up as a spam signal rather than as a quality signal for that website. So is the quickest way to solve it to report them through a spam report? Generally, what I would do in a case like that, where you're really seeing the whole website being replaced, is maybe just send it to me directly. So you could do that directly by email or through Twitter or whatever, so that we can take a look at that specifically there. When it comes through a web spam form, then that ends up being processed by the web spam team, which might not necessarily be able to escalate that to kind of the engineering team that's working on the whole dupe elimination, canonicalization problem. Thank you.

Let me see what else we have in the questions. We have a little bit more time, so maybe we can take a few more. Oops, not loading. OK, let's see. I recently discovered that 404 errors on my e-commerce website are generated by users that delete their products. I'm thinking of answering with a 410 page to stop the crawling of those pages. Is that a good idea? You can do that. You don't necessarily need to do that. So in the long run, we see 404s and 410s as the same thing. So we drop those pages from our index. We end up crawling them a little bit less frequently. So if turning that into a 410 is a lot of extra work on your side, I would just keep it the way it is. That's not necessarily something bad.

How would Google treat pages with analytical charts where the data is mostly rendered with JavaScript charts and data points? Very useful for users, however with fewer grammatical sentences. Does Google try to understand the charts to be useful information? No, we don't try to interpret the charts to see what this chart is showing and whether or not these numbers or these graphs are useful and correct information. But rather, we try to collect the kind of associated, maybe indirect, signals there. So things like text on the page. If you have just one chart with no title and no additional text on the page, then it's very hard for us to understand what this is actually about.
So if you have descriptions, if you have alt text for images, if you have titles and captions on pages like this, that makes it a lot easier for us to understand what this page is about and where we should be showing it. Similarly, we have kind of the indirect things as well, where if people really love your charts, then they will recommend them to other people. And we can sometimes pick that up as links to your pages. So that's something where we can also find out a little bit more based on things like the anchor text that's used, the text around those links, so that we can better understand what this page is actually about. So my recommendations there are kind of similar to any page that has a lot of images on it: really make sure that you're providing the context through all of the additional methods that you have available. And don't just rely on the content of the image or of the chart to be what is used for crawling or for ranking in a case like this.

So does adding an image to an otherwise image-less page help directly? It doesn't change anything for web search. It can mean that this image would show up in image search. So if image search is what you're looking for, if you're seeing people using Google Images as a way to visually find content, then that could make sense. If you don't care about Google Images, or people are not searching for your images or searching in a visual way for your content, then maybe that doesn't change anything.

Does changing the text of older internal links to help users understand them have any downside? Can it hurt the rankings? No, that sounds perfect. If you're updating anchor text internally to make it more easily understandable by users, then usually that also helps search engines to better understand the context of those pages. So I would definitely go for that. The one thing I would avoid doing is changing anchor text into an image. So if you have, I don't know, a fancy font or something that you want to use on your pages, and you change a link from being a text link to an image link, and you don't have any textual kind of connection with that image or that link, then it's really hard for us to understand what the anchor text is supposed to be. But if you're just changing the text, like the wording, and you're changing the anchor from one wording to something else, that's perfectly fine.

Would you recommend using a canonical tag or a 301 to transfer low-quality content to new articles? So in general, if you're creating a new article that replaces an old article, then that would be a case for a 301 redirect. Because you're replacing something, you could also just keep the same URL and just update the content there. So that's something where you don't necessarily need to move things if you're improving them. You can keep the same URLs. Usually it's even better for us if you continue using the same URLs, because then we can build up on those existing signals rather than having to think about how we transfer these signals to the new URL.

I discovered via a site checker tool that I failed a bot check. I don't understand why. I think this is something you'd probably want to look into with the tool that you're using, to better understand what it is they're actually testing and how they're testing. If your site kind of didn't pass whatever test they're doing, then you need to look into what those tests are and kind of try to take action on that. So I can't really help with these third-party tools.
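Returning to the earlier e-commerce question about user-deleted products, a minimal sketch of answering those URLs with 410 instead of 404; the Express-style route and the illustrative data are hypothetical, and as John notes above a plain 404 works essentially the same way in the long run.

```typescript
import express from "express";

const app = express();

// Illustrative data: live products and a log of product IDs deleted by users.
const products = new Map<string, { title: string }>([
  ["sku-1", { title: "Example product" }],
]);
const deletedByUser = new Set<string>(["sku-2"]);

app.get("/products/:id", (req, res) => {
  const product = products.get(req.params.id);
  if (product) {
    res.send(`<h1>${product.title}</h1>`);
  } else if (deletedByUser.has(req.params.id)) {
    // Explicitly "gone": the product existed and was removed by its owner.
    res.status(410).send("This product has been removed.");
  } else {
    res.status(404).send("Not found");
  }
});
```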
We had a short-term issue with a domain redirect after rebranding and changing the domain name. In the beginning, the old domain was redirected to the home page without keeping the URL path. When we noticed this, we set up the redirect to keep the path. So for the first three weeks, everything was redirected to the home page. How long could it take Google to recalculate these internal links and figure that back out again? So generally, as you probably noticed, redirecting everything to just the home page is a really bad practice, because we lose all of the signals that are associated with the old content if you're just saying, well, the home page replaces all of these lower-level pages. So that's a really bad technique. I strongly recommend, if you do any kind of site move, to really double-check that you're redirecting on a one-to-one URL basis. And especially if you're just changing domain names, really make sure that all of the old content redirects to exactly the matching content on the new domain. So as much as possible, really make it clear to Google's algorithms that you're not providing a new website, you're not changing anything general across your website, you're just moving everything from this URL to a new URL. And if we can recognize that it's really a clear one-to-one move, then it's a lot easier for us to say, oh, we will just take all of these signals and pass them on to the new domain structure, because it's really one-to-one exactly the same thing. So I assume you kind of noticed that as well.

With regards to how long it takes for Google to kind of clean that back up again, that's really hard to say. If you've had this issue for a couple of weeks, then probably we've recrawled a lot of your website and kind of run into this stumbling block here. So I assume we're going to have to recrawl and re-index pretty much all of your website to understand that the new structure is pretty much the same as the old site structure and to take that into account. So I wouldn't be surprised, if you had this problem for, you mentioned, three weeks, and you've cleaned it up, that it takes at least another three weeks, maybe a couple of months even, for everything to be kind of more or less back to normal. And another tricky aspect here is that when you're looking at a time frame that takes a couple of months for things to settle down, then it's hard to make the assumption that the final state will be exactly the same as it was before. Because in that time, things can change in search as well. So it might be that maybe your site is additionally affected by one of the algorithms that we launched, which could be going up or going down. So that's something where the final state after a couple of months could be different than the initial state that you started with, just because of normal organic changes in search as well. But I really suspect that this is something that will take a few months to settle down if you've made such a, I'd say, strong mistake with regards to the site move there.

We recently discovered that many noindex, follow URLs on our website are being canonicalized, and the referenced canonical URLs are 404. Let's see. Do we no longer need to worry about noindex with canonicals on them? I don't know exactly what you're seeing there. If the URLs are returning 404, then that sounds like those URLs are not just noindex, but they're actually 404 pages. So in a case like that, they would fall out anyway.
In general, with regards to noindex URLs, what can happen on our side is that we would treat them as soft 404s. So not as clear 404s where you're returning a 404 code, but rather as pages where we see a 200 result code, but actually nothing indexable is shown there. Sometimes that's an error message that's shown that's just returned with the wrong result code. Sometimes that's something where it's a normal page, but it has a noindex. And we think, well, this page has been noindexed for such a long time. We don't actually need to keep anything here. We can treat this as a soft 404. So that's something where essentially those pages would drop out of our index completely rather than just being seen as a noindex page and still having some signals associated with them. So with all of that said, I'm not quite sure which direction your question is headed there. I'd recommend maybe posting some more details on Twitter or in the Webmaster Help forum so that we can take a look to see what exactly you're seeing and whether or not that's a problem on your side or not.

OK, wow, still more and more questions. Maybe I'll just open it up to questions from you all here, if there's anything left on your mind, first, before I start digging into those. I have one. So I've seen that in the most recent version of Chrome, the Canary version, you do support lazy loading of web pages. And I remember, a long time ago, Matt Cutts said that it's not a problem when you do lazy loading of content, and the bot should handle that correctly. However, when I see the rendered version in Search Console, I'm not seeing my images, or I'm seeing just the first image. So I'm not sure if it's good if I use lazy loading. For me personally, I do prefer using lazy loading. But I'm not sure if Googlebot is rendering the whole page correctly. And do you recommend using lazy loading just for the images, or for everything, basically, that's not the first primary content?

Good question. So I think, first of all, the Chrome side of what is happening there is more a matter of making it so that pages load quickly for users in the browser. And that would be independent of anything that you do on your website. So I wouldn't worry too much about what Chrome is doing there. I think they're doing some fancy things to make it so that pages load a lot faster, which is always a great thing. But that wouldn't be reflected in Google Search, at least not at the moment. With regards to lazy-loading images in general, it's important to make sure that Google is able to find the image source tag on the pages directly. So depending on how you're doing lazy loading, it might be that we're able to pick that up properly. It might also be that we're not able to pick that up. So in particular, if you're using JavaScript to add the image source tag to these pages, and that JavaScript is not triggering, because when we render the page it doesn't get to process that, it doesn't get an event trigger to change the JavaScript or to change the image for that page, then it might be that we don't see the image tag when we render the page. And accordingly, we would not see the image tag for image search. So one workaround that you can do here is to use the noscript tag to include the image tag. That's a really simple approach. Another thing you can do is to use structured data to give us the images, so that even if we don't see them when we render the page, we know that they're associated with this page. We can use them for Google Images.
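To make the first of those workarounds concrete, a minimal sketch of lazy loading that keeps a crawlable fallback: the class name, the data attribute, and the markup shown in the comment are assumptions for illustration, not a required pattern.

```typescript
// The real src lives in a data attribute and is swapped in when the image
// scrolls into view; a <noscript> copy of the plain <img> in the HTML keeps
// the image discoverable without JavaScript. Assumed markup:
//   <img class="lazy" data-src="/photos/cat.jpg" alt="A cat">
//   <noscript><img src="/photos/cat.jpg" alt="A cat"></noscript>
const images = document.querySelectorAll<HTMLImageElement>("img.lazy[data-src]");

const onIntersect: IntersectionObserverCallback = (entries, observer) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    img.src = img.dataset.src!;   // swap in the real image
    img.removeAttribute("data-src");
    observer.unobserve(img);
  }
};

const observer = new IntersectionObserver(onIntersect, { rootMargin: "200px" });
images.forEach((img) => observer.observe(img));
```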
So those are kind of the two main approaches there. We're also looking into ways of documenting this a little bit better, so that you know which way of lazy loading works for Googlebot and which way doesn't work for Googlebot. So hopefully that will get a little bit better. In the meantime, what you can also do is test it yourself using something like the mobile-friendly test, where you can render your page with the mobile-friendly test and then look at the source code of the rendered page. So that way you can see, in the source code that Google has generated, whether the image tag is there, or whether the placeholder that you use for lazy loading is still there. So that's one way to kind of double-check that. But in general, I think lazy loading is a great way of speeding up pages, so I wouldn't be deterred if it's a little bit harder. It's probably still worthwhile. Another thing also to keep in mind with regards to lazy loading is that it's not just Google that's interested in the images on a page. A lot of the social media sites also try to pull out images. And if you're lazy loading in a way that works kind of OK for Google but doesn't work for any of these other sites, that might still be something you'd want to look into. As far as I understand, the noscript workaround should work for pretty much every site.

I have also another question. Maybe it will be interesting for you. So I'm seeing that you're trying to push AMP to people, and in a good way. So you also have the page speed update, and you're taking that into account for the rankings. I have a WordPress theme that's built with speed in mind that is getting a 98 score for both mobile and desktop, built in, without the use of any kind of additional caching. And the AMP version, which is made by the official AMP plugin, is scoring 65 to 75. So it's basically lower. What would you advise me in that matter? Because the AMP version looks to be performing worse than my original solution. OK, congratulations. I mean, congratulations because you made a really fast site. I think that's always fantastic. I know that's really hard, to achieve those kinds of high scores. So I think that's pretty fantastic. So what I would look into there is also, does it really make sense for you to have an AMP version for those pages? Is it something where you get additional value out of having the AMP format, in the sense that Google can serve a pre-cached version to users to make up for the PageSpeed Insights score through the pre-caching and pre-rendering that's possible with AMP pages? The other thing to perhaps keep in mind is, is there any additional benefit that you might have from using these AMP pages? Is it something that we would be able to show in maybe different search elements in a different way, or is this essentially just another page from your website? And based on those things, that might be something where you'd say, well, it makes sense for me to focus more on AMP, to make sure that those are also pretty fast, maybe to find a different way to produce those rather than the generic WordPress plugin. Or you might say, well, actually my existing content works just as well in search, and I will just focus on that and maybe even remove the AMP plugin, because I don't actually need it. I don't have any additional value out of that. So I would definitely, as with any technology that's out there, even if it's promoted by Google, take a critical look at it. And think about, does it really make sense for me, or is this just something that is available?
And I could be using it, but I don't actually have any benefit out of it. Thanks. All right. So time-wise, we kind of managed to get 20 minutes over. So maybe it's time to take a break. I love that you all are still sticking around. That's always a good sign. But let's take a break here and pause for the weekend, I guess. Depending on your time zone, maybe it's already Friday evening. Maybe it's still morning. Maybe it's in the middle of the night. Any case, I wish you all a great weekend. Thank you all for joining. Thanks for all of the good questions. And for those of you who have more information to send my way, thank you ahead of time. I'll take a look at that with the team here to see what we can do. All right. Thanks, everyone. Have a great time.