All right, welcome everyone to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst here at Google in Switzerland, and part of what we do at these Office Hours Hangouts is answer questions together with webmasters and publishers, like the ones here in the Hangout. Some questions were submitted already, not as many as we usually have, so maybe we can get through most of these. As always, if you're kind of new to these Hangouts and want to jump in with the first question, feel free to do that now. Nobody? OK. I'm old. You're old. I didn't say that. Well, OK. Go ahead.

So it looks like about five days ago the featured snippets for the mobile results started to be powered by AMP. Was there any official change that you can confirm happened, or was there no significant change that you were aware of around that? So it used to be that the mobile featured snippet would just link to a non-AMP article. Now, if you compare the query on desktop versus mobile, sometimes Google is going to show an AMP version from a completely different URL than the desktop version for the featured snippet. So for example, if you do a search for how to use Twitter on desktop, Wired comes up; you do it on mobile, Forbes comes up.

OK. I have no idea what specifically is changing there, but in general, this is an organic search feature. And like other organic search features, we can potentially show AMP pages there. So I wouldn't be surprised if we show AMP for some of those. Hi there, nice to see you. So thank you. I don't have any official confirmation or denial. I mean, if people are seeing it, then obviously I can't really say that it doesn't exist, but in general, the featured snippet is an organic search feature. And we show some sites there. We don't show all sites there. Sometimes we show something like this there. Sometimes we show it slightly differently. On desktop and mobile, it can look a little bit different. That's essentially completely normal. It's Google. Oh, Google. Yeah, I mean, we try to figure out what works. And sometimes we do experiments, and we test things out and try different variations to see, does it make sense? Do people react the way that we expect them to react? What are the long-term metrics around this, to figure out what we should be keeping and what we should be swapping out? I was seeing some really cool stuff with AMP. Yeah, just the Google team in Toronto there. Yeah, it's pretty cool. They're really quick. I mean, this is something that I think all sites should be doing, this A/B testing and constantly iterating and figuring out what works, what doesn't work, and not sticking to something that worked for the past 10 years and will therefore be kept forever. And kind of this constant movement, I think, definitely makes sense and is a lot easier on the web compared to something like a native app, where you have to push updates and all of that.

All right, let me run through some of the submitted questions. Can Google detect or see if a page receives traffic outside of search and use it in the search algorithm, like when we have mobile users from Android or browsers or bookmarks? In general, we don't use things like Google Analytics. So that's kind of the one part of this where we might be able to see that, and as far as I know, we don't use anything else like that when it comes to search either.

Why isn't our homepage in the first position when we search for a brand name?
Here's a link to a forum thread. So in general, this is something that's sometimes tricky, especially when it comes to things like two kind of normal words that come together, where we see a lot of different variations of results that might be possible there. That's something where sometimes we show one site, but sometimes we show another site. When I try it here just now, it actually shows your homepage on top. So maybe that's OK, but this is from Switzerland, so probably that doesn't help that much.

In general, what I noticed with your site, and I think there is another thread, another question there as well, is that you changed domain names at some point, and some of the content is still live on the old domain. So what essentially happened there is our algorithms kind of noticed that part of this domain moved over, but we couldn't move the whole site over because there's still content that's live on the old site. So that's one thing I'd try to figure out: where the content isn't redirecting, either 404 that or just 301 it as well, so that we can really take all of the signals that we have for the old domain and forward them to the new domain (something like the sketch below can help verify this). So I believe if you search for the old name, we do show your new domain as well, probably on top, as far as I remember. But this is one of those things where if you do a migration and you don't do a clean migration, then sometimes you see these kinds of after-effects where it's a bit tricky. The other thing, kind of like I mentioned in the beginning, is when you have a brand name that's not that unique, then sometimes it's tricky for algorithms to recognize that this is actually a brand name and not two normal English words that could potentially be matching a bunch of other sites and pages.

Can I transition into that question with a rich snippet brand name issue, or can I just ask it later? I think, Chris, you're from the site, right? Yes, I am. Thanks. Does that help a bit? It does. Yeah, with the two words, that makes a lot of sense, what you're saying about the two words maybe registering for other sites. But what's interesting about that is that the other sites that rank are sort of aggregators in our industry. So they would also be showing storage facilities, but they happen to choose ours, and they become the ranking result. So I was wondering if there's something that we can do to sort of influence that; maybe we don't have the right signals on our site to say that we're the real authority for Life Storage. Yeah, I think that sometimes it's not that easy. So let me see if I can look at the English results. Let's see what you're looking at. So when I search for it in English, I get your site on top, too. So kind of tricky to see. So that's like the sparefoot.com. Is that what you're looking at? Yep. Yeah. So I think anything you can do to kind of clarify that you're the original source, which is making sure that the Wikipedia entry is clean, that you have the local business entry set up. I think a lot of that you already have covered. But all of that kind of comes together for us to figure out that this is actually a brand name or a specific term that belongs together, and it belongs with your website, which is kind of what people are searching for, rather than two individual words.
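As an aside on the redirect advice above: here's a minimal sketch that spot-checks whether pages on an old domain come back as a 301 (or a 404) instead of still serving content. The domain and paths are hypothetical placeholders, and this is just an illustration of the check, not an official tool.

```python
# Minimal sketch: verify that URLs on an old domain either 301 to the new
# domain or 404, instead of still serving content after a migration.
import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # don't follow redirects; report them instead

opener = urllib.request.build_opener(NoRedirect)

for path in ["/", "/about", "/blog/some-old-post"]:
    url = "http://olddomain.example" + path  # hypothetical old domain
    try:
        resp = opener.open(url, timeout=10)
        print(url, "->", resp.getcode(), "(still serving content here)")
    except urllib.error.HTTPError as err:
        # With redirects not followed, 301s and 404s both surface here.
        print(url, "->", err.code, err.headers.get("Location", ""))
    except urllib.error.URLError as err:
        print(url, "-> error:", err.reason)
```

If the old host is doing its job, every known path should come back as a 301 with a Location header pointing at the new domain, or a 404 for content that was intentionally dropped.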
So kind of going through all of the places where you would normally mention your business, or where you might have mentioned your business in the past with the old domain name, and making sure that all of that is updated to point at your new domain name as well, I think that should help. But sometimes this is something that takes a bit of time to really update.

What do you do about a rich snippet that's hijacking a branded term? How can a rich snippet hijack a branded term? Basically, through what people think when they land on the page. So for instance, if you look at, say, Coca-Cola pricing, and there's a third party that's using that same article, people are coming to that article thinking that it's that company, because their logo is in that rich snippet. So they're seeing the rich snippet for that branded term. Which part of the rich snippet are you looking at, like a breadcrumb or an image? Yeah, a breadcrumb, yeah. But I mean, you have the breadcrumb plus the image there. So for instance, if I'm writing an article, like a company versus this company, alternatives, and then I'm seeing that I'm using the other branded company's term, and then I land on that rich snippet landing page, I'm already not on a page for what the person was looking for. So, I see a lot of that happening, and I was wondering, what do you do about that? I don't quite follow what you're seeing. So I'm just wondering. Like, one thing that we sometimes see is someone else is ranking for a product that we're selling. And sometimes that's just the way it goes. That's not fair. I don't know. Like, sometimes people write about a product that people are selling, especially if you're looking at kind of like a product name and then some alternatives, or a product name and then other words together; then sometimes that's something that other people would rank for. Or, like, a product name plus coupons, for example: it's kind of normal that you would see these coupon sites ranking there instead of your page, where maybe you don't even have any coupons. But I don't know exactly what you mean with regards to the rich snippet. So if you want, maybe send me a screenshot and I can take a look at that with the team here.

All right. So I received this one by email: after a site receives a manual spam penalty and they fix the issue and they're re-included in Google, how long does it take for the site to recover? What happens to the traffic after the recovery? So in general, in the most common cases, when the manual action is based on something that's on the site itself and you fix those issues, and the web spam team is able to confirm that those issues are fixed and re-include the site, remove the manual action, then usually what happens there is that we have to kind of reprocess the site briefly, which usually is pretty fast, and then we can show it essentially at the same position as before. So that's something where, when those issues are resolved, we can show it completely normally in the search results. I think "the same position as before" might be kind of misleading in the sense that, if the position before was based on something tricky that you were doing on your pages and you remove that trickiness from your pages, then obviously it might rank a little bit differently.
So a common example there is when a manual action is based on links to a site, and we were showing a site maybe higher in the search results because of the spammy links. The web spam team took action on those links and kind of took them out of the equation, and if the webmaster goes out and fixes those spammy links, removes them, then obviously those spammy links are not going to help that site anymore. So it's not the case that the site would jump back up to the higher ranking again. It would just rank normally now, without being held back by any manual action there. So that's something that usually happens fairly quickly.

There's one exception where it sometimes takes a little bit longer, and that's when we remove a site completely from indexing because of a manual action. And this is something that's fairly rare, and it's kind of an extreme situation. It's when the web spam team really looks at the site and says there's absolutely nothing on this site that is worthwhile for us to even crawl or index, so we might as well essentially ignore it completely. And that's usually reserved for situations where it's really just a pure spam domain: it's just aggregated, spammy, automatically rewritten content, and there's nothing of value there at all. Then in those cases, when the manual action is lifted, we first have to start crawling and indexing the content from the site again, because we have essentially nothing left in our index, and that's something that can take a bit of time. So I'd say maybe on the order of magnitude of a couple of weeks for that to start showing up again. But in the more common situations, where there's just something subtle on a page that's spammy, maybe structured data on a page that's spammy, and that's resolved, then as soon as we reprocess that, it'll be completely like before.

So, John, in such a case, if the website is pure spam, Google does not even crawl the website? There are extreme situations where we do that, yeah. So that's something that's really rare. Sometimes you see these postings in the Webmaster help forum, but usually that's not something you accidentally run into. That's really something where, like, a spammer goes off and buys a thousand domains and just automatically fills them with content. Then at some point, we just say, well, there are like a thousand subdomains on each of these domains and all of them are just pure junk; why would we even bother wasting time on this? So that's kind of that situation. And usually the situation where a webmaster runs into this and tries to resolve that manual action is when they buy the domain name from someone else, or from a domain registrar, and nobody says, well, this was previously used by a spammer. I mean, usually you don't see that offhand. So if you buy a domain name that was previously used, and through other ways you can recognize that there was a ton of spam on there and there's a manual action for pure spam in Search Console, then maybe you're kind of in this situation. And that's something where, once you do that reconsideration request, it can take a few weeks to actually start showing up normally again in Search. But now you guys are a part of ICANN, right? So you guys can see more information. What do you mean? You guys sell domains as well. You can buy a domain through Google. I think we sell some domains, but not all of them.
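As a side note on researching a previously used domain before relying on it, which comes up just below with archive.org: the Internet Archive exposes a public "availability" API that shows whether archived snapshots of a domain exist. A minimal sketch using that real archive.org endpoint; the domain name is a hypothetical placeholder, and this is an archive.org API, not a Google one.

```python
# Minimal sketch: check whether the Wayback Machine has archived history
# for a domain you are thinking of buying, via archive.org's public API.
import json
import urllib.request

domain = "previously-owned-domain.example"  # hypothetical placeholder
url = "https://archive.org/wayback/available?url=" + domain

with urllib.request.urlopen(url, timeout=10) as resp:
    data = json.load(resp)

closest = data.get("archived_snapshots", {}).get("closest")
if closest:
    print("Archived copy from", closest.get("timestamp"), "at", closest.get("url"))
else:
    print("No archived snapshots found for", domain)
```

If snapshots exist, skimming a few of them can reveal whether the domain previously hosted spammy content before you buy it and inherit a manual action.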
But I mean, the tricky part here is, of course, if there was really pure spam on this domain and we removed it completely from the index, then if you go off and research that domain name, you might not realize that there's actually a bunch of junk associated with the domain name from before. So maybe archive.org has some information that's useful there. But in any case, once you've verified ownership of the domain, you'll see that there's a manual action there and you can address that manual action. And maybe one thing also to keep in mind with regards to a manual action on a new domain that you pick up: it's important for the web spam team to see that the domain is actually used for something legitimate. So if you buy a new domain name and you see a manual action, and you don't have any content up at all, and you submit a blank reconsideration request, chances are the web spam team will say, well, there's nothing here that we would show; why do we even need to bother with a reconsideration request when there is no content that we would have to index? So first get some content, put it up, and then do the reconsideration request.

There's a bit of background noise, I think, from one of you. Let me see if I can mute you or not. All right. OK. So off to the next batch of questions.

My question has to do with brand power's effect on rankings. Wow, that's a complicated phrase. Can an established website with a relatively weak brand but great technical aspects aspire to top rankings when competing with stronger brands with so-so site efforts? Does brand, in this case, trump technical aspects? I think this is a tricky question, because you're essentially looking at very different aspects of our search algorithms. And we use a lot of different aspects. So officially, we say we use over 200 different signals when it comes to crawling, indexing, and ranking. And some of those might say this is a really great website, but actually this part is really bad. And other websites might have this part really good and that part kind of bad. And combining the two and figuring out where we should rank these in a search results page, where we can't put the results next to each other, that's sometimes tricky. So there are definitely things that can help a website, but there are also some things where, especially when you enter an established market, it's just very hard to surpass all of the other sites there. And in general, even though we have a lot of recommendations with regards to technical things that you can do on a website that are best practices, it's not the case that just purely following the technical best practices will make your site jump to number one. Especially when you look at it from a user's point of view, that doesn't always make sense. So imagine you're comparing one page that's really, really relevant, but it's an HTML page that was made in FrontPage 4 or some really ancient system, and it doesn't have any structured data markup and it uses tables for the layout, but it's really fantastic information, against a page that uses a really modern layout, a really modern framework behind it, that's set up really great from a technical point of view but doesn't really have that relevant information. Users probably prefer to see the more relevant information rather than something that's less relevant but more modern, or more valid, or any of that.
So from that point of view, it's not the case that just by following all of our technical recommendations, your site will jump up to number one. But it's definitely the case that if you do follow these technical guidelines and recommendations that we have, then it's a lot easier for us to actually understand, well, this website is really about this topic, and then we can understand your content a lot better and figure out where we should be ranking it within the search results page. So it's definitely a great practice to follow those technical guidelines, but it's not the case that just by following them, your site will automatically be number one and the best website of its kind out there. You really need to make sure that what you're providing is really relevant and useful and compelling and unique for users as well.

A question on app indexing. When searching for a specific product on my mobile and clicking one of the search results, I sometimes get deep-linked into an app which I have on my phone. This app then shows a search results page with multiple variations of that product instead of that single product page. I like that behavior because it gives me more choice, but does this get considered as a type of cloaking? Is there even something like cloaking within apps? So I don't know what the current status is there with regards to app indexing. I know maybe a year ago or so, when the app indexing stuff was more prevalent in Search Console, we had kind of the concept of the content having to be equivalent in the app and on the mobile page. I forget what the name was. But that's something where we had a number of algorithms that were set up to double-check that the content that's shown in the app is actually equivalent to the content that would be visible on the web page itself. And in a case like that, we perhaps would have caught something like this. It's really tricky with regards to apps and web pages, because sometimes the UI and the design of an app can be very, very different from a website. So that's something where it's sometimes hard to understand where there is a mismatch in content and where the content actually matches. So my recommendation there would be: if you are working on app indexing, check out Firebase, where all of the app indexing is currently located. They also have a support page where you can submit support questions directly. And I believe a certain number of questions you can submit for free, and you get an answer directly from the Firebase team. So that would probably be the best place to get an authoritative answer on the current status of app indexing.

Can one get a penalty or manual action for adding a red frame around specific organic search results with a Chrome extension? I've seen some shopping Chrome add-ons that do that to attract customers to click on their results. Couldn't find anything in the SEO world, but it might go against the Chrome Store guidelines. I don't know how the Chrome Store handles things like that, so that's hard for me to say. At least from my point of view, this would not be something that would be associated with the website itself, so this wouldn't be something where the webmaster guidelines would kind of step in. It would probably be more something on the Chrome Store side of things. I know there are a number of toolbars or Chrome extensions that add additional information into the search results page, so maybe that is all kind of a similar thing.
And that's kind of what you're aiming to do. I really don't know what the guidelines or rules are with regards to the Chrome Store, though.

Aggregators are ranking above my original content. So there are some sites that aggregate a portion of our content, and then they link back to us. This is very common across the finance domains. And our website is essentially ranking below those aggregators. Similar to what I asked. That's, I guess, kind of similar. Yeah. So in general, I took a quick look at the site to double-check. A common situation when I see that happening, especially with regards to aggregators or other news sites covering something that you also have posted on your website: on the one hand, this is something we'd like to improve a little bit, to make sure that we recognize the original source and show it appropriately in the search results. On the other hand, it can be very tricky when our algorithms think that one website is really low quality, and maybe other websites are seen as higher quality. So for instance, if you take a website that overall is very low quality and has individual articles that are kind of OK, and some of those individual articles are aggregated or posted with a snippet on some other website, then it can definitely happen that that other website, which perhaps in our eyes is of higher quality overall, could rank above this lower-quality website where you kind of have more information. So that's something where it's hard to find the right balance. But usually this is a sign that you need to work on the quality of your website overall. And working on the quality of a website overall is sometimes tricky; there is no single meta tag that you can tweak to make that automatically work. And to some extent, this is something that I don't think is completely solvable, even from a practical point of view. For example, we see this all the time with some of the content on our blog: people will write about it, they'll take snippets from our blog, and there's this one guy here who does that. And they'll put some additional information in their blog posts, and they'll quote our blog, and then suddenly their content ranks above our blog. And from our point of view, is that bad? Are we doing something wrong? Is that an error in search? Or is this essentially OK? Because if someone else is taking content from our blog and adding additional information, or if they have a bunch of comments, people who are talking about this there, people who are sharing this on Twitter, then maybe that's the right result to show on top in the search results. If a PC is selling for $18.99, and the other guy is selling it for $19.20, and the article is about, hey, come to us, and people are like, well, I'm going to go get that PC for $18.99, whatever, because it's ranking above the first result. If it comes down to situations like that. I think that's a bit different, though, because then you're essentially saying, well, this is something different that you're providing on these pages. But especially if you're talking about a snippet of content that's used on other websites where they're providing additional information, then that can be perfectly valid.
It's like if I write a report about Shakespeare and I provide a lot of detailed background information on that, then maybe my report about Shakespeare should be ranking above the original Shakespeare piece, because it does provide a lot of additional things that weren't available in the original article.

Is there any consistency to rich snippets, like staying consistent at number one? I mean, well, not moving in terms of changing, because, I mean, rich snippets do change. But take "machine learning": just that query, it hasn't changed in a long time, since it's a Wikipedia article. Do you have any consistency? Because, I mean, how often do you change them? How often do we change them? Because the number one result doesn't change at all. Like, I haven't seen the number one result change in five, six months, since the last whatever shakeup. Well, we make changes all the time. So it would be kind of normal to see these things change. And I guess sometimes there are results that are kind of stronger results from our point of view, where it's harder to say, well, we should be shifting these around just for the sake of shifting things around. So sometimes it does make sense for one result to remain on top for a while. OK.

Hey, John. I'd like to ask you a question about similarity. So let's say there are two sites that are kind of fulfilling the same purpose for different markets, but in the same language. And they are owned by the same organization, and it's easy for Google to understand they are owned by the same organization. Is it OK if they use the same code base, similar JavaScript libraries and CSS, and maybe the same CDN and stuff? It doesn't necessarily make them really similar, kind of near-duplicate, right? It kind of depends. So usually what happens there, especially if you're talking about two sites that are kind of similar, is that it's more a matter of, from a technical point of view, what we do when we show that in the search results. Do we filter them out? Do we fold them together? Do we pick one? Do we show both of them? It's usually more a matter of things on that level than something from a web spam point of view. Sometimes there are situations where we see a number of pages that are essentially the same, and they're across multiple sites, and we can recognize that these are actually multiple sites that are just used to provide more presence in the search results. Those might be situations where the web spam team will say, well, this is kind of like doorway pages, and we will take out all of these duplicates and focus on the one primary source and try to solve it like that. Yeah, it's definitely not doorway pages. I was actually fighting against doorway pages for five years. But it's more like they have different titles, different content, different pieces of data, but they serve the same kind of user intent, for the same market, for the same language, and they're owned by the same company. But they offer different added value. But I don't know how Google understands that added value. Well, if we can't understand the added value, then probably we will try to fold them together. But I think a lot of these cases are also such that you kind of lose out when you dilute your content across two or three different copies with different tweaks, compared to making one really strong version of a page.
So that's something where it might be worth considering: should we be folding these together and actually have something that's really strong and ranks a lot better overall, rather than splitting things apart into multiple things that are OK-ish on their own, but none of them individually are really fantastic? That's kind of what I would aim for there. What sometimes also happens is we look at pages, or usually websites overall, and we see, well, in general, this website is built on the same server as that website, and if you take a path from this website and you put it on the other one, then the same content shows up; then our systems might say, well, maybe these websites are actually identical, and we'll fold them together. So that's something that kind of happens when you look at things like using the same foundation, same framework, same database setup, where if we can really swap out the URLs and we see exactly the same content, then we're trying to make it easier on you by folding things together, so that you don't have to think about the rel canonical and all of those things. Yeah.

And there is a completely different question, but it's also about Google. So again, I don't want to mention that Fred update, but still, after something happened, let's say, I was looking at more than 200 websites in Search Console, looking at their data and trying to find some pattern. And in the end, what I found was: if the website is really old and there are too many links, although the quality of the links, I don't know, I can't comment on, but let's say millions of links, and the site is really old, like eight, 10 years old, they kind of got better rankings after that update. But the other sites that are a bit younger and have a smaller, let's say, link profile, they started losing rankings. So do you think this might be related? Is that pattern right? I don't know. So the name Fred was kind of used for a whole bunch of algorithm updates that, I don't know, started in the spring. And since we make so many updates all the time, from my point of view, it would be impossible to say this specific change is associated with the name Fred, because there are so many different things that come together that we've changed since then. And people kind of combine all of these into the one name, Fred, which from my point of view doesn't really make it a lot easier, because sometimes there are obvious quality issues that our algorithms can pick up and focus on, and sometimes there are obvious link issues that our algorithms can focus on. But if you put all of these together and say, well, I found five websites that are like this, therefore Fred must be like this, that's, I think, kind of misleading. But it's not for me to speculate. So. Then I think you guys need to create a new blog post. This time it should be ten questions instead of 23. 23 is too much? Yeah, it's too much, yeah. What is it? Top seven reasons, and you won't believe number three. I don't know. I mean, we make these algorithmic changes all the time, so it's really kind of tricky.

Okay, let me run through some of the other questions that were submitted. Some of these are really easy. Does the priority on the crawl errors screen in Search Console actually mean anything? Yes. There's actually a blog post about crawl errors; back then it was called Webmaster Tools. Let's see if I can find it. Maybe not. Oh yeah, "Crawl Errors: The Next Generation".
That mentions some of the things that go into what we use for priority with regards to crawl errors. And usually the idea is to try to recognize things that are more relevant for users, things that users are more likely to see. So if the top crawl errors, the ones with the lowest priority number there, are essentially irrelevant URLs that nobody cares about, like random URLs that happened to be made up on the web somehow, then chances are the lower ones are not going to be more interesting either. So, I don't know, I'd recommend double-checking that blog post, because it mentions what we're looking at there. I can't find it offhand.

Okay, does the 50,000 URL limit in sitemaps include alternate link URLs? The limit is based on the number of loc entries or elements that you have in a sitemap file, which is the base pages. But in general, sitemap files are validated right after you submit them. So if you submit a sitemap file and it says it's okay, then it's okay. Whereas if you submit a sitemap file and it says too many URLs, then obviously you have too many URLs. So this is something that's really easy to double-check.

What is causing an influx of 403 access denied errors in Search Console? Usually that's from your server. So we don't make up these errors that we show in the crawl errors section. If we're reporting a number of 403 errors for your site, then that's something that we saw when we crawled your website. Sometimes this is due to things like bot protection or denial-of-service protection that maybe your hoster or your blog or your CMS has built in. That might be worth double-checking. That makes it a little bit tricky, because when you check those pages yourself manually, you're like, well, this page works perfectly. And Googlebot still, when it crawls normally and tries to fetch a thousand pages, might trigger this kind of bot detection or denial-of-service protection and see these kinds of errors. So that's something where you probably need to check your server logs to see what was actually happening around that time. And sometimes you can double-check it with the Fetch and Render feature in Search Console, to see if at the moment Googlebot is blocked from accessing these specific URLs.

Please update the SEO starter guide. Yes, we will. We actually have some work being done on this, and I hope it's not too far out. So it should be coming at some point. Most of the content there is still kind of okay, but things like the mobile website and mobile pages information is a bit based on the older state of things, when everything was very different on mobile. So that is especially something that we're updating.

We have five different websites: running shoes, dress shoes, boots, insoles, and socks. And we want to get them all into one big website. What's the correct way to do this? 301s, yes; redirects are the perfect way to do that (a rough sketch of a host-level redirect map follows below). The tricky part here is, whenever you're combining or splitting a website, you can't assume that the results will be like the sum of the individual pages. So if you look at the impressions for all of these separate websites, sometimes different websites will be shown in the same search results for the same query, so you can't just add up the number of impressions and say, well, the final website will get the sum of these impressions. In general, we have to re-evaluate and re-index the whole website overall to understand what the new website looks like.
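As an illustration of the 301 approach for the five-sites question above, here's a minimal sketch of a host-based redirect map as a WSGI app. All domain names are hypothetical placeholders, and mapping each old host to a matching section of the combined site is just one reasonable layout.

```python
# Minimal sketch: 301-redirect each old shoe site to its section on the
# combined site, preserving the original path so deep links keep working.
def app(environ, start_response):
    host = environ.get("HTTP_HOST", "").lower()
    path = environ.get("PATH_INFO", "/")
    sections = {
        "running-shoes.example": "/running-shoes",
        "dress-shoes.example": "/dress-shoes",
        "boots.example": "/boots",
        "insoles.example": "/insoles",
        "socks.example": "/socks",
    }
    target = "https://big-shoe-store.example" + sections.get(host, "") + path
    start_response("301 Moved Permanently", [("Location", target)])
    return [b""]

# For local testing only:
# from wsgiref.simple_server import make_server
# make_server("", 8000, app).serve_forever()
```

The same mapping could just as well live in server config (rewrite rules); the important part is that every old URL answers with a permanent 301 to one specific URL on the new site.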
So things like combining a website or splitting a website can take a little bit longer to be reprocessed. What I would recommend doing there is making sure that you come up with a plan that you can stick to for the long run, and that you do the switchover at a time when you're not relying purely on search traffic. So maybe you're doing other forms of traffic acquisition, like a lot of ads or offline ads, or maybe it's the off-season, when you don't really care about search that much. But in general, kind of assume that whenever you're doing this kind of merging or splitting across websites, it will take a couple of weeks, maybe months, to really settle down properly again.

John, one of my clients got this ad experience report, and they are a big media outlet. And apparently, after two times now, the entire domain is flagged with an ad experience issue. But this also affects some subdirectories where there is no advertisement at all. So what is the effect? Because I've also seen ranking decreases a couple of days after this ad experience report. So do you think this ad experience report also might end up with decreased rankings? At least at the moment, that's purely informational. So that's not something that, as far as I know, we would use at all in search. But we do use things like the content above the fold for search; that's something that's fairly old. So maybe it's just kind of coinciding with other things that we're picking up there. But as far as I know, the ad experience report and the flagging that we do there are purely informational at the moment. And that's something that is probably worth cleaning up and fixing, because if we're flagging that, then chances are your users are getting upset as well. So that's something I'd definitely fix. But at least for the moment, I'm not aware of us using that at all in search. And it seems manual, because I've seen eight examples that have been sent to me, like eight screenshots and screencasts. So is it like a manual thing that the team is doing? I don't have the details there, so I don't know. Okay. There is a help forum for the ad experience report. So if you're kind of worried or wondering what's happening there, I'd definitely post there. Thank you.

If Google finds a URL which contains a parameter that is set to "doesn't change content" in Search Console, does it ignore it or strip the parameter from the URL and then crawl it? Both, maybe. So in general, we take these settings from Search Console and we try to apply them, but we also kind of spot-check things to make sure that you don't accidentally have a setting wrong in Search Console. So usually what would happen is we would strip that parameter and crawl with a simplified version of the URL and try to use that (roughly like the sketch below). But it's not the same as when you block a URL or URL parameter in the robots.txt file. In robots.txt, that's something that really blocks crawling, whereas this essentially tells us, well, for indexing, you should focus on the simplified version; for the most part we'll do that, but we might still double-check the URLs that do have this parameter. So if you look at your server logs after you make a change like this, chances are the number of crawls for that parameter will go down significantly, but they won't go to zero completely. So for the most part, for most websites, that's absolutely no problem.
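To make the "strip the parameter and keep a simplified URL" idea concrete, here's a small sketch. The sessionid parameter name is a hypothetical example, and this illustrates the general idea of URL canonicalization by parameter removal, not how Googlebot is actually implemented.

```python
# Minimal sketch: drop one query parameter from a URL, keeping the rest,
# to produce the "simplified version" that would be crawled and indexed.
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def strip_param(url, param):
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != param]
    return urlunsplit(parts._replace(query=urlencode(query)))

print(strip_param("https://shop.example/list?page=2&sessionid=abc123",
                  "sessionid"))
# -> https://shop.example/list?page=2
```

Note how the simplified URL keeps parameters that do change the content (like page) while dropping the one flagged as irrelevant.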
If crawling with that parameter is really a problem for your website, if it triggers a really complicated calculation on the backend, then you might need to block that parameter in robots.txt in addition to the settings, so that we don't even try to crawl those URLs.

Whoa, lots of stuff coming. Okay, paid links: are links from your own blog to your own e-commerce site, on two different domains, considered paid links that require nofollow? No, if this is between a handful of your own sites, that's perfectly fine. If you have a whole blog network and you're constantly linking to your e-commerce site, then that could end up looking like a private blog network, which would be against the webmaster guidelines. But if you have a normal setup where you have one blog, you write about your product, and then you have your store and you say, well, I'm writing about this product here, and it's clear that you're writing about your own product and you're not misrepresenting yourself, and anyone looking at that page recognizes, well, this is a link to a different part of their business, that's perfectly fine. Even from a subdomain to a regular domain? Whatever, it doesn't really matter, yeah. So some people put their blogs on a subdomain, some put them in a subdirectory, some on separate domains. It's really kind of up to you. Sometimes blogs on the same domain, on the same host name, make it a bit clearer that this actually belongs to this e-commerce site, and make it a little bit easier for us to say, well, all of this belongs together. But for the most part, we recognize that across subdomains as well. So it's kind of up to you. Sometimes there are technical reasons to go one way or the other; it really kind of depends.

Yeah, hi, John. Hi. John, actually, I pinged you one location where I have been trying to remove one URL from indexing, but for two months continuously it has been appearing. I am using noindex, I am using the URL removal tool, but it is still appearing. So I don't have any idea why this page is ranking or coming up in search. Which one do you mean? Maybe you can... In the sitelinks, first link. Okay, the first link, in the sitelinks. See something? Sometimes it's easy to kind of double-check why this is happening, sometimes it's not so easy. I don't know, I'd have to double-check to see what is happening there. Is that something where you added the noindex fairly recently, or has it been long? I don't think its cache was coming up earlier, but about two months back I added the noindex, and since then the cache was removed, but this page keeps appearing despite me using both the URL removal and noindex. Okay, I think this is, I don't know, I'd have to double-check. Kind of hard to dig into the details. Let me see.

Okay, we introduced a new page and wanted to target that page, and linked to it from the old ranking page, but now both pages are ranking. If I remove the content from the old page, are there any chances the new page will rank higher? So in general, for a case like this, I would redirect from the old URL to the new one, because that's kind of what you're doing, moving content from one URL to another, or at least rel canonical from the old one to the new one, so that we understand how you want these put together. Just by moving the content itself and leaving both pages up, it's sometimes a bit tricky, because we don't really know: are you moving, or are you planning on keeping both of these? What is kind of the long-term plan there? But John, in such a case, this is the home page, so we want to target that specific section or folder.
So I cannot redirect the home page to the folder. Then that's something that will have to settle down over time with normal indexing. So if you can't redirect it, then basically we have the situation where we know the home page used to be really relevant for this query, or for this type of question, and suddenly the content is also available somewhere else; then it's sometimes tricky for us to decide between the home page and this other page, since you have both of them in there. So that's something that I would expect to take quite a bit of time for our algorithms to figure out. So will removal of the main content from the home page help my other page to rank higher, or will the other page still rank at its natural ranking and not get any boost? I assume the other page will just rank naturally. So, I mean, obviously, if the content is removed from the home page, then that's a strong signal for us. What definitely also helps is when the home page actually links to that page itself, so that we understand this is a really important page, something that used to be on the home page and is now here. Whereas if you link to it through a long chain of things and then in the end you come to that page, that's kind of telling us this page is not really that important. But yeah, especially if you can't move the URLs, then making sure that the content is on there is probably the best step.

Okay, we've seen in recent months an increase in URLs blocked by robots.txt and a decrease in the number of URLs being indexed. The robots.txt file has not been updated to the extent that would have led to such an increase. What could that be? Hard to say without looking at the details. In general, crawling can be a bit weird in that maybe we'll decide to re-crawl URLs that we crawled a really long time ago, and sometimes that results in weird spikes in crawling where you're like, what is Google thinking? And our algorithms are basically just saying, well, we know about these URLs, we should check them out again, it's been a long time, and then suddenly we'll try to re-crawl a whole bunch of those. So that might be something that you're seeing there. It's really hard to say without looking at specifics.

Why isn't Google excluding the backlinks and ranking only based on the content and user signals? Is there still too much spam? It's hard to say. So in general, we try to take a large number of signals into account for ranking, because that way we can even things out: some sites are good here, some sites are good there, and both of them might have a good place in the search results. So just focusing on one aspect, I think, would make things a little bit too one-sided.

John, so I have a client who has 20 sites that are targeting 20 different countries and languages, but with the same setup, though of course different content, different backlink profiles and stuff. So a couple of them had backlink issues. I identified that they used to be in the Penguin filter, and now, by looking at some data, I understand they are not in Penguin. But mysteriously, two of them, which are Sweden and Denmark, still seem to be under the filter, and it's such a mystery, because I applied the exact same methods I applied for the other countries, which were Brazil, France, Australia, India; and India even had a manual penalty. So those were all resolved, but Sweden and Denmark, although I did everything and their profile is smaller, still seem to be affected.
And I think because the profile is smaller, that means they have fewer links now. I removed, or let's say nofollowed, disavowed, and did outreach to webmasters to get them removed, but now there are fewer links. Do you think that could be the cause of it? That can happen, yeah. So that's something that is really rare, but sometimes we do see that as well, where someone will say, oh, I know my previous SEO did some spammy backlinks, therefore I'm going to delete all of them. And obviously that doesn't have such a good outcome. So that's something where you need to kind of watch out what you're doing. And especially if there's nothing left that's really naturally supporting the website, you could see things like that, where it continues to rank badly. And that's something where, yeah, it's tricky. It's not a simple thing that you can just fix.

All right, let me double-check the remaining questions to see if there's something short that I can answer. How much content is duplicate content on an e-commerce website? I keep getting errors for duplicate content and thin content issues. I'm not quite sure where you're getting duplicate content and thin content errors. If you're getting these as manual actions, then that's something where the web spam team is really seeing significant issues, and it's not a case of our algorithms being skewed by two or three copies of the same thing.

I track positions on some keywords and can see some very large fluctuations on a daily basis. What can cause this, and how do I fix it? Sometimes when you're scraping Google with keywords to see what the rankings are, you don't necessarily see what actual users would be seeing. So that's something where it's sometimes really tricky for me to take feedback like this and say, well, this is due to this or that, because if normal users don't ever see these kinds of fluctuations, you're kind of looking at an artificial metric.

I have a domain that's focused on providing content for one country, but subfolders contain websites for other countries. We have a lot of high-quality photos. The expected outcome with this setup is that our photos appear on the websites for different countries, and therefore have different alt tags depending on what the target country is. Is that okay? Or could that be hurting the ranking of our images? In general, that's okay. So we see images, especially if they're on one website, kind of varied with different anchor texts, with different context around them; that's completely natural. That's not something I would really worry about there. One thing I might watch out for is to make sure that you actually have some good image landing pages that do work for image search. So take a look at the queries in Search Console, in Search Analytics, that lead to image search results for your website, and try them out and see what we show there. Are we showing the right versions of the content, or are we getting confused and perhaps showing the English landing page for a query that's in French or in Japanese? And that might be something where you need to figure out what you could be doing to improve things. That could be something like setting up hreflang markup between the different pages, so that we understand those connections a bit better (a small sketch of reciprocal hreflang annotations follows below).

All right, so I think we're kind of over time. I need to take a break here. It's been great having you all here. Thanks for coming. Thanks for all of the questions that were submitted.
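On the hreflang point just above, here's a minimal sketch that generates reciprocal hreflang link elements, where each language/country version of a page lists all the others (including itself). The URLs and the language/country codes are hypothetical placeholders.

```python
# Minimal sketch: emit the hreflang <link> elements for the head of each
# country version of a page, so the versions all reference one another.
versions = {
    "en-us": "https://example.com/us/photos/",
    "fr-fr": "https://example.com/fr/photos/",
    "ja-jp": "https://example.com/jp/photos/",
}

for lang, url in versions.items():
    print(f"<!-- head of {url} -->")
    for other_lang, other_url in versions.items():
        print(f'<link rel="alternate" hreflang="{other_lang}" '
              f'href="{other_url}" />')
```

The annotations need to be reciprocal: if the French page points at the Japanese one, the Japanese page has to point back, otherwise the connection may be ignored.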
We have the next Hangout lined up for Friday, and a German one, I think, for Thursday, a French one later today, and then a Hindi one as well sometime this week. So lots of Hangouts lined up. If you have more questions, feel free to drop them in the appropriate Hangout for your language. And I hope that was useful. Hope to see you all again in one of the future Hangouts. Bye, everyone. Good night.