All right, welcome everyone to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a webmaster trends analyst here at Google in Switzerland. And part of what we do is talk with webmasters and publishers like the ones here in the Hangout today and the ones that submitted a bunch of questions as well. So if you're watching this live, feel free to jump in. The link should be in the Google Plus listing. There's still room here. All right, I'll go through some of the questions that were submitted. And if you have any comments in the meantime, feel free to jump on in. Or if you have more questions, that's fine too.

All right. I need to know what it means that Fetch as Google in Search Console can't fetch all the code of one HTML page. Is there an issue? If my HTML has 1,239 lines and Fetch as Google only shows 568, is that a problem? So there is a limit in the Fetch as Google tool with regards to the amount or the size of a page that's returned. It's not by number of lines, but it's by size. However, it's a fairly high limit. So that's something where it's a little bit below what we would use for Search, but it's pretty high. So I think it's a couple hundred kilobytes of HTML that it can fetch. So if you're running into that limit, then my guess is, for the most part, your HTML pages are just way too complex. And it's not so much that you would automatically see an issue in Search, but it's definitely something that will slow things down for users who are loading your pages. If your HTML itself is a couple hundred kilobytes large, then that seems a bit too large. I mean, not in the sense that we would rank your site less, but it just slows everything down. And that's usually pretty easy to improve.

We have eight different magazines under one common CMS, and lately we found out that Google indexes articles from one magazine under the domain of another magazine. We have no idea why this happens and can't find out where Googlebot gets these wrong URLs. We've already checked the sitemaps. We've not found any URLs with this problem. What could it be? So we have multiple mechanisms to try to recognize when sites are duplicates of each other. And one of these might be catching something that you have on your site there, in the sense that it might look to us that these sites are actually the same. And this is especially common, or I mean, it's not really a common issue, but it's especially something that could happen if you have multiple domains where using the same URL pattern, the same file name and extension and URL parameters, leads to the same content. So this is sometimes a symptom of using the same CMS, where if you use the same path on a different domain name that's running on the same CMS, and it returns the same results as another one of those domains, then we might assume that these domains are actually duplicates of each other and that we can index one of them and kind of focus the content there. So that's what I would watch out for. Double-check your server settings and your CMS settings, so that when someone accesses the content from one of your other magazines on that CMS on one domain, it returns a 404 instead of returning the same content as it would otherwise. So really make sure that these different domains really have their own unique content and that they're not sharing in the sense that they're using the same URLs for the same content.

I have a question about HTML headings and their structure in the source code.
Hopefully you would agree that the recommended heading structure looks like H1, H2, H3. My question is, is it necessary to organize the headings in this order in the code? Or is it OK if the headings in the code appear in a different order, but they're rendered in the right order? So from our point of view, we use headings to determine the context of the page. But we're not picky in the sense that we say, oh, you can only have one H1 heading, and an H3 should be below an H2, which should be below an H1. We're trying essentially just to group the different pieces of content together and understand which of these pieces belong together. And for that, headings are a great thing. Whether you start with H1 and then go to H3 is kind of up to you. Whether you start with H3 and then you have a different H2 in the middle, that's totally up to you as well. We're just trying to kind of understand the structure of your content. And it's not the case that you would see any kind of ranking change by shuffling headings around on a page.

Can I ask one? Sure. OK, about Search Console, if that's the topic of today. In Search Console, we are expected to add all possible variations of our domain. So www, non-www, HTTP, HTTPS. And eventually, if we have subdomains, again HTTP, HTTPS, www, non-www. And if we have two, three subdomains, this could grow quite large quite soon. Is that really needed or not? For example, if I have a subdomain images.mydomain.com and I never used the www version of images - it's redirected anyway to images - should I add that one too? Or the HTTP version, if I never used HTTP, if the site is new and it started with HTTPS right away? If you're sure you wouldn't have any data there, then you don't need to add that. So that's kind of, I guess, makes it a bit harder to decide, or something that you have to decide then. But if you're sure that there's no data there, then I wouldn't worry about it. It's mostly important for us that we can send you a message if we notice something is going wrong. So if you have everything on HTTPS, then if we notice that your HTTP version of your site is not redirecting properly and maybe causing errors that are affecting your HTTPS version, then it's useful for us to send you a message for the HTTP site. But if there's never any content there, if it's not like a higher level part of your site, then that's something you generally don't need to do. But for example, backlinks, which we can see - if someone links to a site as HTTP, or using www when we use non-www, would we see those links if we don't add those versions? Yes, you would still see them. So for links, we see them as being between two canonicals. That means if you're redirecting to your preferred version, then we would probably pick your preferred version as the canonical, and then all of the links would be listed there. So adding the other versions would mainly just be for the messages. So eventually, I could create a second Google account and add those on the second one? Exactly, yeah. In Search Console. Yeah, that's how I think I would do it in a case like that. Just have one account where you add everything, and one account where you kind of do your work, where you focus on the things that you really want to look at the information for.

OK, I'm still on these subdomains. Like this images one - each image is used there, but the images site itself, images.mydomain.com, basically has no content.
So only the main page of that subdomain, I'm redirecting to my main page. Do you think that's a good solution, or would you recommend something else, like having a placeholder page on that one? I think that's perfectly fine. You could even just have it return 404 if you want. So I was thinking that maybe someone somehow gets to it, and if I redirect, I get more traffic. Yeah, sure. Why not? So that's good. Yeah, OK, thanks.

All right, let's see what other questions we have here. How would you create a page about a specific informational topic so that this page ranks number one for all related queries to this topic? What would you focus on from a content and a user perspective? That seems like something everyone is trying to do. And if everyone is trying to rank for the same topic, then they can't all rank number one. So I don't really have a magic solution here for creating any kind of page that will automatically rank number one for all related queries to that topic. That's really something where you need to make sure that what you're providing is of the highest quality possible, is something that people are interested in, that you're passionate about, that you're knowledgeable about, and in the end, that often falls into place. The technical side of things here, for the most part, if you're using a common CMS, will just kind of be taken care of. And in the end, that's something where, for the most part, from a technical point of view, that's less of an issue. It's really a matter of making sure that what you're providing is fascinating for people, is something that they want to recommend to other people, that they find is a good search result when they see it in the search results.

Does the fact that my page links to a page on a different domain that's really relevant and ranks above my page - so the same topics on both pages - make my page more or less relevant to that topic? From Google's point of view, it doesn't really change how relevant your page is on that topic. So just because you're linking to a good page doesn't automatically make your page a good page as well. So that's something to kind of keep in mind there. Obviously for users, if you're linking to a really good page and that's useful information for your users, then that could make your page look better for them there. But from a search point of view, just because you're linking to another page doesn't necessarily make your page better.

Google Freshness Algorithm: do I need to update the content on my pages every couple of months? Will I get a ranking boost by doing so? And no, you definitely don't need to do that. So if you have new and updated content, then by all means, feel free to go ahead and update those pages. But if nothing is changing on your pages, then just shuffling words around doesn't change anything at all for Google.

John, may I ask a related question to that topic? Sure. Well, I have some of those pages that are updated regularly. They are about some tools that I have. And sometimes the change is just a small feature that was improved, and there is a bump in the version number. Would it be a problem if the changes on the page are that minor? Or should I have more information, like a change log at the bottom of the page, to make it clear that there was probably a relevant change? Because what you need to put there is not much, since it's just a feature that was enabled or improved. You don't need to make big changes when you make changes.
I would just make whatever changes you need to make there. We will generally recognize that there was a change and pick that up for indexing as well. But it doesn't matter if it's a small change or a big change. It's just a change, and we focus on the new content then. OK.

By the way, can I ask a related question regarding - well, not exactly a related question to this topic, but something that was mentioned before regarding migrating to HTTPS? I followed your tutorial, like the steps. And at least when I read it, there was a recommendation that you could migrate parts of the site at a time and see how it goes. But then I read an article on Barry's site, and he says that for small sites, maybe it's better to migrate all at once. Well, I don't know if my site is small or not. My sitemaps have in total like 23,000 pages. Well, what is a good criterion to follow? So I think that the most important part is to make sure that your HTTPS version works properly. And that's something that you can sometimes test when you do a step-by-step migration. If you're really sure that your HTTPS setup works properly, if you've run them in parallel for a while, then switching the whole thing over at once is also fine. So usually what happens when we recognize that the whole website is changing to HTTPS is we'll crawl a little bit faster to try to process all of these changes a little bit faster. So that's usually a good thing. So that's something where moving everything at once kind of helps your site, because we can shuffle things over a little bit faster.

Yeah, well, the reason why I'm migrating parts at a time is because I need to review each type of page to see if there are some embedded elements that are not HTTPS. So I need to do one at a time, because it takes some work. Regarding that, is there any penalty, or could it affect things, if by accident I migrate a page and did not notice there was some embedded element that is not HTTPS? It would probably just show a notice in browsers saying that the page is not totally secure. What is the point of view from Google as a search engine, not the warnings in Chrome? Yeah, so from the Google Search side, what would happen there is we would recognize that there is insecure content on this page and we would be less likely to pick that URL as the canonical. So we have your HTTP version, your HTTPS version. If we recognize the HTTPS version isn't correct, then we will prefer the HTTP version. It doesn't mean that you'll see any change in rankings. You won't be penalized for that. It's really just a matter of, do we pick this one or do we pick the other one? What if the HTTP one has a 301 redirect already, after the migration was done? It's still possible that we might pick the HTTP version. So we use a number of factors for the canonicalization. The redirects do play a big role there. But if we see really strong signs that the HTTP version is actually the one that should be indexed, we might still pick that one for indexing. With regards to this specific situation, I don't think you would see any change at all in Search. We would probably just pick the HTTPS version, because you have the redirects and I assume maybe you have the canonical set up and all of that as well. We'd probably pick the HTTPS version and we would show that normally in Search. It wouldn't be ranked lower or anything like that.
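As a rough illustration of the embedded-elements review described above - not a specific tool recommended in the Hangout - a short script like the following could list subresources on a page that are still referenced over plain http://. The example URL and the set of tags checked are assumptions to adapt to your own pages.

# Rough sketch: list embedded resources still referenced over plain http://
# on a page that is meant to be served via HTTPS. The URL below and the
# tag/attribute coverage are illustrative assumptions only.
from html.parser import HTMLParser
from urllib.request import urlopen

RESOURCE_TAGS = {"img", "script", "iframe", "source", "video", "audio", "embed", "link"}

class InsecureResourceFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        if tag not in RESOURCE_TAGS:
            return
        for name, value in attrs:
            if name in ("src", "href") and value and value.startswith("http://"):
                self.insecure.append((tag, value))

def find_insecure_resources(url):
    page = urlopen(url).read().decode("utf-8", errors="replace")
    parser = InsecureResourceFinder()
    parser.feed(page)
    return parser.insecure

if __name__ == "__main__":
    for tag, resource in find_insecure_resources("https://example.com/"):
        print(f"insecure <{tag}> resource: {resource}")

Running something like this against a sample of each page template would catch the kind of mixed-content issue discussed here before the redirects go live.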
One thing I usually recommend for moving to HTTPS is that you have both versions of your site available in parallel, without the redirect, for a while, so that you can test for issues like this. There is, I believe, an HTTP header that you can add to your pages that reports insecure content, so that you can have an endpoint on your website to monitor for insecure content during the time when everything is available in parallel, and then you can go through that list and fix those pages. Oh, that's interesting. Where can I find more about that? Do you remember the name? No, I don't know offhand. I can add it to the post afterwards. OK. That's a good point.

I have one more, if possible. Sure. It's about another friend of mine. I have a lot of friends. He has a domain, a site, and at some point in the past - I don't know exactly, years ago anyway - he also had a blog attached to the site, on a subdomain, blog.mydomain.com. And at some point, he ditched the blog. He removed it and never touched it again. Now I got to look a bit at his site, and I accidentally found some backlinks to the blog subdomain. I added that subdomain in Search Console, and I saw there maybe a few thousand, maybe 15,000 backlinks linking to that subdomain. Of course, the content is lost and everything like that - when he deleted it, he deleted it entirely. And what I thought of doing was to go through those links somehow, and where I see a link that is still live - because some are not, although they still appear in Search Console for some reason - set up a redirect to a page on his current site. Would that make any sense or not? Or, if there are already years since there was content on those pages, is a redirect now too late to save any link power left in those backlinks? Yeah, I think if you're talking about years in between there, then probably that won't change anything. So it wouldn't help. Yeah, I don't think so.

John, regarding Search Console, can I ask one question? Sure. So this is the one brief thing: I had to switch around the Hangouts for the Search Console one and the open one. So this is kind of the open Hangout, and the Search Console one is coming next Tuesday. But you're welcome to ask questions on the mic. OK, so can I ask it right now, or should I be waiting for that? Sure. OK, John, actually, this is regarding a site move. So the blog is in a subfolder, but I am changing my e-commerce website, so half of the pages will be moved to a new domain. But I don't have any plan to move the blog to the new domain as well. In such a case, we are planning that we will not be redirecting the blog subfolder, but we will be redirecting one-to-one, old page to new page on the new domain. In such a case, should a change of address be done in Webmaster Tools or not? What would you recommend? No, if you're only moving a part of your site, or if you're splitting your site or combining sites, then I would not use the change of address tool in Search Console. OK, thank you.

John, going back a bit to my previous question. I think that maybe I should still do it, if maybe not for link power from Google, then at least for people that may land on those pages and actually click on those links. So could it hurt? It won't hurt. It won't hurt, but you can check your logs to see if people are actually doing this. That's how I found out about the first one - checking the logs, accidentally, in a web log. Perfect.
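The reporting header mentioned a moment ago is most likely Content-Security-Policy-Report-Only, which asks browsers to report, rather than block, policy violations such as resources loaded over plain HTTP. A minimal sketch of attaching it to every response, assuming a generic WSGI application and a hypothetical /csp-reports collection endpoint on your own site; the exact policy value is an assumption, not a recommendation from the Hangout.

# Sketch only: the header name is a best guess at the one referenced above,
# and the policy value plus the /csp-reports endpoint are assumptions.
REPORT_ONLY_HEADER = (
    "Content-Security-Policy-Report-Only",
    "default-src https: 'unsafe-inline' 'unsafe-eval'; report-uri /csp-reports",
)

def report_insecure_content(app):
    """WSGI middleware that adds the report-only header to every response."""
    def wrapped(environ, start_response):
        def start(status, headers, exc_info=None):
            return start_response(status, headers + [REPORT_ONLY_HEADER], exc_info)
        return app(environ, start)
    return wrapped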
I mean, if you see people going there, then it's a shame to send them to a 404 page when you could be sending them to what they're looking for. OK, thanks.

Can I ask one question? Sure. OK, see, the thing is that we have a news website, an English news website. So we are planning to now serve our content to an audience that understands Hindi. So should we have a separate category, like xyz.com/category/hindi, and include all of our Hindi content in that? Or should we have a subdomain like hindi.xyz.com? What would you recommend? Both of those work. So for us, for languages, we don't need a specific structure within the website. So categories work, subdirectories work, subdomains work, URL parameters would work - anything where you have separate URLs. You don't need to kind of do a separate domain or a separate structure for that. Thank you.

All right, let me run through some of the questions that were submitted, and I'll get back to answering more questions from you people here as well. There's the torch. Whoops. Somewhere there is a noise.

Do you track how many users you send from Google to a site and then use that in some algorithms? Like, you send such and such an amount of users to a site and you would expect to see the signals, and adjust rankings if not. So this is a common misconception that I see every now and then, that Google has a limit for each site and says, well, I will send you 1,000 visitors and that will be enough per day. And from our point of view, that wouldn't make sense. That's not something that we would do. So we don't know how many people are searching for topics that you have content on on your website. And we want to send everyone who's searching for something where your website is really the best result to your website, because that's what they're trying to do. So it wouldn't make sense for us to artificially limit the number of visitors that we would send to your website.

Can Google Penguin affect or be a penalty on a specific keyword? For example, after one year, my site lost ranking for one specific keyword, but it's still ranking for a lot of other ones. Could it be Penguin? From my point of view, that wouldn't be the case. So it's not the sense that Google Penguin is trying to recognize keywords and block ranking for individual keywords, but rather it's trying to understand where web spam patterns take place and to kind of neutralize those. So if you're seeing a drop in one particular keyword, that would probably be something else. But there are lots of quality algorithms that might be applying there. It might be that your website is just very, I don't know, over-optimized, in the sense that it's almost or even spammy for our algorithms from that point of view. So in such a case, John, would this be page-specific or domain-specific? The general web spam algorithms try to be as granular as possible, but for the most part, we do look at your site overall. OK.

If one page is the only result the user clicked on for a general query and he found what he was looking for, could you use that and treat this user's query as a navigational query? Because it was the only result the user picked, and he was satisfied and didn't search for more. So I guess the general question here is how do you determine if something is a navigational query or not? And this is something that is not so trivial as presented here in the question. So for the most part, I wouldn't worry about this as an SEO.
If you're really interested in what makes a query a navigational query and how information retrieval works, then I would recommend taking a look at the Google jobs site and seeing maybe if there's something where you can work at Google and learn more about all of these topics as well.

If a user clicks on my number one ranking page and can find what he was looking for, and doesn't come back to click on another page and doesn't reformulate the query, can you see this and say, oh, this is the most relevant result on a high quality website? This is something that we wouldn't do, as far as I know, on a per-page basis anyway. So I couldn't imagine our algorithms taking this and saying, oh, this must be a high quality website and it must always rank for this query. So I don't think that would really play a role there.

Why does Google Shopping allow a site to be listed multiple times through different marketplaces? I have no idea about the policies on Google Shopping, so that's something I can't help with at all. I would recommend going to - I think there is a Google Advertiser product forum where you can go to get more information from other people who are advertising on Google or using services like Google Shopping, to see if this is something that would need to be reported, or if it's OK, or if it's kind of a gray area. I really have no idea.

For a domain that will have sequels, for example a movie site, could the authority from the previous name affect the rankings of the new name? Scenario: could Google consider that, due to more inlinks about the original movie, it causes the site not to rank as well for the new movie title, due to an internal classification of the site which won't switch until inlinks about the new movie overpower the old? I'm not really sure what you're asking here specifically. One thing I do notice from time to time is if you have a recurring event, which could be a movie where you have Star Wars 1, 2, 3, 4, 5, or if you have a conference that takes place every year or every half year or something like that, then by reusing the same domain name, you're kind of building on the past. Whereas if you put the year into the domain name, and you say, oh, this was this conference 2015, 2016, 2017, then all of these conference websites will have to rank on their own. So if someone is searching for that conference in general, then you will be competing with the other ones. And it could be that someone searching this year might see the conference website from last year or from a couple of years ago, because that was just a really good result back then. So that's something where I would recommend building on the existing website and kind of taking this year's content and moving it into an archive for next year, and rebuilding the new content kind of on the base of the existing one. So with regards to this movie question and the links there, usually it wouldn't be a matter of links and a matter of Google knowing this is the 2015 version and not the 2016 version. We would try to rank the content on those existing URLs based on the information that we have there.

We're experiencing an issue with rich cards where Search is using data from our site and linking to a different site. Is there anyone we can speak to about this and how to get it to link to us, as we're the ones that are providing the information? Maybe you can send me a note on Google Plus directly and I could take a look at that to see what might be happening there. John, that question came from me. All right. How do we contact you about that?
If you're here in the Hangout, you can just put it in the chat on the bottom. And I can double-check what's happening there and get back to you about that. OK, thank you.

With regards to the question about different magazines sharing a common CMS and broken links: is there any way to find out where Google finds these broken URLs? How could we find the referrer in the server logs? So you wouldn't find that in the server logs. You would usually find that in the 404 errors section in Search Console, where you can click on the URL and open the "linked from" tab, and it'll show the source of that URL.

Is a link from a page that receives a lot of search traffic more powerful than a link from a page that receives no search traffic? And no, not necessarily. So just because one has traffic and the other one doesn't, doesn't necessarily mean one is better than the other.

We have a JavaScript library which parses the DOM and translates the page from English to French. In Fetch as Google, the page seems to be translated and appears in French. However, the search results still appear in English. Is there a difference in the way Fetch as Google works and how Google actually indexes the page concerning JavaScript frameworks? Is there a way to debug this? So debugging it is probably best done with Fetch as Google in the rendered view, because what it does is it shows you the rendered version of the page as we would see it. So that's usually the best place to debug this. I think there are two tricky aspects that might be playing a role there. On the one hand, when we initially index a page, we generally focus on the HTML version of the page. And then in a second step, we do the rendering. So for example, if you use Fetch as Google and submit to indexing, then probably we will index the HTML version of the page, which would potentially be the version that's not translated. And then later, as we index it for good, we'll render that page and use the rendered version of the page for indexing. So that might be a disconnect that you might be seeing there with regards to the version that's shown in the search results. So I would kind of give it a bit of time to see how it settles down, and really double-check that we're really only indexing the wrong version of the page. If you're seeing that we're indexing the wrong version of the page and we're really not picking up the translations at all, despite them working in Fetch as Google, then I would post in the Webmaster Help Forum and give us some sample URLs, some sample queries where you're seeing this happening. With regards to what you see in the search results, what might also happen is that when we render the page and we see that there's actually a different language version there, we index both of these pieces of content under that URL. So if you specifically search for the English version or the original version, we might still show that to you. Whereas if you search for the French text, then probably we'll be able to show you the French version of that page.

I need to move a large 200,000-page site to HTTPS. The site has both versions live and both are indexed. I'm thinking, instead of doing a straight-up permanent redirect, to put canonicals and wait until Google moves all of the pages. Would that be a good strategy? That definitely works as well. I think for the long run you'll want to use redirects and probably also set up HTTPS so that users automatically go to the HTTPS version of the page.
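As a rough way to sanity-check a move of this size once the redirects are live - not a step from the question itself - one could request a handful of HTTP URLs without following redirects and confirm they answer with a permanent redirect to the HTTPS equivalent. The host and sample paths below are placeholders.

# Sketch: spot-check that HTTP URLs permanently redirect to their HTTPS
# equivalents after a migration. Host and sample paths are assumptions.
import http.client

def head_without_redirects(host, path):
    conn = http.client.HTTPConnection(host, timeout=10)
    try:
        conn.request("HEAD", path)
        resp = conn.getresponse()
        return resp.status, resp.getheader("Location", "") or ""
    finally:
        conn.close()

if __name__ == "__main__":
    host = "example.com"
    for path in ["/", "/category/widgets", "/products/12345"]:
        status, location = head_without_redirects(host, path)
        ok = status in (301, 308) and location.startswith("https://")
        print(f"{path}: {status} -> {location or '(no Location header)'} "
              f"{'looks good' if ok else 'needs a look'}")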
But if you're kind of cautious and you want to be sure that things are working well, then a canonical is a good way to let users access both versions but to encourage Google to index the HTTPS version.

May I ask one more? Sure. It's something kind of strange. There's a site that is ranking really well for most related keywords. It's a personal site of someone. But the problem is it is not really ranking for the person's name itself, and it is ranking below an empty Twitter account they created with their name but never used. So that account has zero posts, zero followers, and it ranks above their site. Any idea why something like that could happen and how it could be fixed? Because it's not like the site itself is penalized - it ranks for a lot of keywords and it attracts a lot of traffic from Google. So everything is cool, but it is just not ranking for the name itself. It may be because the domain name is the name itself, and it may be seen somehow as a spammy thing - like having johnmueller.com, and you would rank for Google Switzerland and Google analyst and SEO, but you don't rank for John Mueller. Yeah, I mean, I have that problem too, because there are lots of other people that are called John Mueller. So I mean, it's not a problem. It's good that everyone has some room in the search results. I don't know. So that seems like something where maybe something is suboptimal on that website, so that we're not able to evaluate it properly. So I don't know what the best approach there would be. I would probably post in the help forum and see what other people can find. And if someone needs to escalate that, then we can take a look at it too. The problem is that he is not using the name itself in his posts, in his articles, because it would be strange to say "me, John Mueller." But you would have that as an author tag or something like that on the page. That's, yeah... The name itself is not mentioned at all on the site? Or fairly little? No, not really. It's not really mentioned. It appears in the links, since the domain is like that. It appears as anchor text for a lot of the backlinks. But on the site itself, yeah, it's not really appearing. Yeah. Then that would probably be the first thing that I would do. So it should be somehow artificially added? Well, not artificially added. I mean, if it's your site, if you're creating the content, then you have "copyright by John Mueller" or something like that at the bottom, or "written by John Mueller" on the site. And that should be enough? Or could... That we can pick up, yeah. I mean, if it's not mentioned at all on the site, then that's really hard for us to say, well, we guess that this is actually the site about this person, but we don't see any mention of it. So that's a lot harder. OK, so a footer note, an author tag at the end of his posts, something like that. Thank you.

Google announced that starting January 10, intrusive interstitials may not rank highly on mobile and will be subject to ranking penalties. They listed as exceptions specific cases: legal obligations, cookie and age verification. My site closes every weekend for the Sabbath using a modal. We took this action exclusively due to good faith religious observation. It is in no way, shape, or form marketing - so it's not a marketing thing. Will this cause any problems? So what I would recommend in a case like this is to return a 503 HTTP result code, which tells us that your site is not available at the moment and that we should try it again later.
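To make that concrete, here is a minimal sketch of a server answering with a 503 and a Retry-After hint during the closed period, while still showing visitors a notice. The Saturday-only check and the messages are simplified assumptions, not an exact Sabbath schedule or a setup recommended in the Hangout.

# Sketch only: serve a "closed" notice with a 503 status so crawlers treat
# the outage as temporary. Schedule check and messages are placeholders.
from datetime import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

def site_is_closed(now):
    # Simplified: treat all of Saturday as closed; a real site would use the
    # actual local start and end times of the closure.
    return now.weekday() == 5

class SabbathAwareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if site_is_closed(datetime.now()):
            self.send_response(503)                   # temporarily unavailable
            self.send_header("Retry-After", "86400")  # suggest retrying in 24 hours
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.end_headers()
            self.wfile.write(b"<p>We are closed for the Sabbath. Please come back later.</p>")
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.end_headers()
            self.wfile.write(b"<p>Normal site content goes here.</p>")

if __name__ == "__main__":
    HTTPServer(("", 8000), SabbathAwareHandler).serve_forever()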
Users can still see the normal interstitial or whatever you're showing in a case like this. But search engines will recognize that the content that you show there is not what you want to have reflected with your site. That's not the content that you want to have indexed, and we'll try again later. And you can specify how much later with the header as well. So you could say try again in one hour, or try again in 24 hours, and we'll try to follow that. That's something where I think the primary effect of a site not doing that would be within Search itself, in that we would index the content of this interstitial instead of your normal website content. So from my point of view, it's not so much a matter of us recognizing this as an interstitial and saying, oh, you're showing an interstitial, but rather us recognizing this as your page's content and indexing your content like that instead of the content that you actually have. So the 503 result code covers both of those cases and makes sure that we don't index that content and that we wouldn't use it as a basis for recognizing whether this is an intrusive interstitial or not. Additionally, we talked with the team that is doing this specifically with regards to this kind of situation. And for the most part, we would treat these types of interstitials the same as we would any other kind of legal or policy interstitial. So this wouldn't be seen as an ad interstitial, for example.

About interstitials, what we did on a site - and I want confirmation whether it's good or not - we replaced the interstitial with something shown on the visitor's first visit. Instead of showing an interstitial, we insert it in the web page itself, so it doesn't open separately; it takes up the visible part of the site, like an interstitial would, but it is practically inserted in the web page. If the user scrolls down, he gets the content. He doesn't have to click on the X; he just has to scroll down. Once he makes the first click, that part of the site disappears, and the menu goes on top and everything. Is that OK or not? We would probably still recognize that as a type of interstitial. Even if it's on the page itself? Yes. Because for the user, it's hard to differentiate: is this an interstitial, or is this just a big block of text that I can go through in a different way? So that's something. A big header. I mean, for the user, it's essentially the same thing. You're blocking the normal content that they would see. So when we look at it from our side, we're saying users should go to this website because this website has good information for this topic. And when they go there, they don't see that information. They see this big advertisement instead. So that's kind of the situation that we're trying to protect users from, and to make sure that we give them a good experience. When we recommend a site, we want to know that they can actually see that content as quickly as possible. OK, so it doesn't help us. We could just return it to a normal interstitial. Probably, I mean, or take it out completely, yeah. But if it's first click free - so a person arrives from Google, sees the whole page, and when he clicks on the first link, the interstitial loads? Yeah, that's an option that you could do. So if within your website you show this interstitial, that's totally up to you. OK, thanks.

Let's see. How can a large site selling auto parts get detailed information in Search for products having many variants? By using a canonical tag on the most popular product version, the others seem lost.
Yes, that's definitely the case. If you use a canonical tag, then we will combine those pages and just index one of those versions. So one thing you can do is make sure that the main page that is chosen as the canonical from your point of view actually has a listing of all of the different variants as well. So that's something where you might need to kind of double-check the way you have your pages set up and maybe tweak it a little bit, so that all of the variants are actually listed on that page as well.

In the index status, it's not clear which URLs are indexed from the sitemap file. We tried to find out which ones are indexed, but Search Console doesn't provide this information. Yes, that's true. In Search Console, we give you information on how many URLs within a sitemap are indexed, but not which ones specifically. And for the most part, that's something you don't need to worry about. It's completely normal for us not to index all URLs that we find, and that's not something that you'd need to artificially tweak. The one thing I would watch out for is, of course, if something is really important for your website, that that's actually indexed. But you notice that fairly quickly, because these are the pages that should be getting the traffic.

My niche is removing Google manual actions. I have not seen any Google manual action for unnatural inbound links since September 23. Does this still happen since Penguin? Yes, of course it still happens. The Web Spam team is still active and taking action on unnatural links in both directions. So that's not something we've stopped doing since Penguin.

In which XML sitemap line are the hreflang errors being generated? Any specific reason not to return errors? So I don't think we have any line numbers with regards to sitemaps, unless there are actually syntax errors in the sitemap file. So if it's a matter of the logical information being wrong in the sitemap file, then we wouldn't generally list the line number there. But if the syntax within the sitemap file is broken, then when you submit the sitemap file, we'll show you that information.

Any special considerations for cross-domain canonicals? For example, the same product across different retail sites? You can definitely use a rel canonical across domains. That's perfectly reasonable. The situation you described, like the same product in different retail stores, where you want one of these as the main page for that product, is a completely normal rel canonical use case.

We have an e-commerce site which launched two months ago. Search Console shows that for "Christmas gift for chef" we should be ranked number 16, but our website is nowhere to be seen. If I do a site query, then my website doesn't show up anywhere. Is Google penalizing me for something? I'm not really sure what the issue is. So with regards to Search Console, Search Analytics, the rankings that we show there are based on the actual rankings that were shown. However, that also means that this can include personalization. So if you search for this query and we show you your website because we think, well, this is your website, you probably want to see it, then we will include that in the average top ranking in Search Console as well.
So that's something where you need to also look at the number of impressions that are shown in Search Console, to kind of make a guess at whether this is something that was shown one-off to a smaller number of users, or whether this is something that everyone would be seeing and maybe just you, when you're double-checking, don't see it at the moment. So that's one thing I'd watch out for there: the impressions, and not just the ranking number.

So John, in Search Console there are many filters where we can select a device or a country. Shouldn't there be one more filter, like personalized versus generic impressions and clicks? Because most people are getting confused seeing this. I imagine a lot of your search results are personalized, so that's something that happens automatically anyway. So that probably wouldn't help too much in that regard. But I find it something which is kind of tricky, in the sense that, on the one hand, I understand that this is something you might not want to see. But on the other hand, we want to make sure that we're actually providing the information that we did actually see in the search results. So even if this is something that was one-off, that's something we still want to show there. Because sometimes it's also the case that your website is extremely popular for one keyword for a really short period of time, maybe one day or even less than a day, and then afterwards it's not popular for that at all. But we still want to show that to you in Search Analytics, even though you can't reproduce it after a couple of days.

How do I jump into the session? So the best way to jump into the session is to click on the link that I post when we start the Hangouts. One thing to keep in mind is there's a limit to the number of people that can join the Hangout live. So if you're interested in joining one of the future Hangouts, make sure you're out there early and that you can click on that link as early as possible.

Let's see. My site has been de-indexed. We have over 5,000 unique, good quality articles authored by writers worldwide, built over three years. I've not done any link building or anything crazy. I want to know the reason for this and what I need to do to get indexed again. So that's something where I probably can't just see if there's something specific happening there. What I would recommend doing is posting in the Webmaster Help Forum and double-checking with other people there. So let's see, the rest of the question goes on: we've got a manual action notice. We don't really know the reason for the manual action, for the mobile redirection. So yeah, I would guess that this might be related to that. We've seen some ad networks place ads that don't just show an ad on your page, but actually redirect a certain number of users from your site to some other content. So for example, if users in, I don't know, Switzerland on a mobile phone access your website, then they're all redirected to someone else's website instead of your website. And from our point of view, that's something that is a bad user experience. That's a sneaky type of redirect, something that would be against our Webmaster Guidelines, where we would potentially take manual action to remove a site so that it doesn't cause this kind of confusion for users. So it sounds like you might have cleaned that up in the meantime.
But if you're still kind of worried about that, if you have questions about this specific case, I would definitely go to the Webmaster Help Forums to double-check with people. And when you post there, make sure that you include the history, what you posted here in the question, kind of the timeline - which dates you saw what, which manual actions you had, what your site's URL is - so that people can help you based on all of the information and don't have to try to guess at what might have happened here.

Can I ask a question regarding deceptive content? Well, actually, I had one report. I never had any reports at all in all my history about security matters. And this was really just about a specific page. I even tried to share a post on Google Plus with you. I don't know if you got it, or maybe you did not have time. I just flagged it as fixed because I could not see anything on the page. And also in Chrome, there is a link to report false alarms - I don't remember the exact name - and after a few hours, the report was considered cleared. But I have a friend that is trying to help a user on Blogger that has a custom domain, but it's on Blogger. And he has many pages being reported as deceptive content. The only thing I see there is basically AdSense ads, which is also something that I have, although I only had one page reported. So how can we know exactly what is being flagged there? If I may also submit a suggestion, wouldn't it be great if Search Console could show an actual capture and say, oh, this is considered deceptive? Yeah, kind of a screenshot or something like that. That might do the trick. Yeah. So I'm not sure what you mean by deceptive. Is this flagged as phishing? No, it's just deceptive content, which is the message that shows there. I can't even review it there, because it doesn't disappear; it just gets flagged. Is it something that in Chrome has an interstitial in between? The exact name that appears is Social Engineering Content Detected. OK. What's this? I think there's a blog post about that specifically. I'm not 100% sure if I'm mixing things up. But one of the things that they did, for example, is flag bad download buttons. So if you have a site that has downloads and your download links actually lead to other kinds of downloads, then that could be misleading. Or your download links lead to a download of a PDF viewer that has embedded five toolbars and all of the kind of crazy stuff that's included with downloads sometimes - then that's something that could be flagged as being deceptive. Or if you have an ad that's made to look like a browser window that basically says, oh, your Flash Player is outdated, you need to install a new Flash Player, click here, and then it takes you to some crazy Flash Player type site to download - then that could also be flagged like that. But I'd really double-check the blog post that the security people did about this, to make sure that I'm not mixing things up. Yeah, so unless there is something that actually shows what is being considered social engineering, we cannot have an idea. It's very tricky when it comes through an ad network, in the sense that we might see that from time to time and be able to flag it. But if you never see it from your side, then that's really hard. It's kind of similar to the mobile redirect question from before, where the ad network might be redirecting users in a different country to a spammy site, and you might not see that.
So that's something where kind of figuring out which ad networks you can trust is really vital for a website. Is Google AdSense trustworthy? I need to talk with the other side. For Google AdSense, I know they work really hard to prevent these types of ads. But if you see something like that on any site, whether it's your own or a different one, you can also report that to the AdSense people. And I know they take these reports very seriously. OK, I'll just keep my suggestion: please send screenshots, so we do not have to wonder. Yeah, that's good. Thanks.

All right. I'm not a tech person. I have a business and marketing background. I am an entrepreneur. Please help clarify the following. I was informed by the agency who set up my initial SEO in Search Console that, besides the sitemap, they were submitting 10 links in the Fetch as Google menu. And they told me I should continue doing so every month, 10 links per month. I tried last week; it was giving an OK sign, and I got a yellow redirect sign. What does this mean and how can I solve it? So for the most part, you don't need to use Fetch as Google to submit your existing pages. That's not something you'd need to do, in the sense that we automatically find those pages for the most part and index them automatically. You don't need to manually do anything there on your end, especially not on a regular basis. If there's something very urgent that you need to have updated in Search as quickly as possible, then this is a great feature to use. But for normal changes on your website, for web pages that don't change at all, you don't need to do that. With regards to the yellow redirect sign, that sounds like we saw a redirect instead of the normal page itself, which can be OK if it's a redirect to another page within your website. It can be problematic if it's a redirect to a different website, because that would be a sign that something on your website isn't set up quite correctly. However, if it's just a yellow sign and maybe a "temporarily unavailable" or something like that as an error, then that might just be a sign that that specific request didn't go through, and you'd want to retry it again if you need to do that. So it depends on what specific error you saw there. If it's just temporarily unavailable, that can be completely normal.

I was using a live chat widget on my site, and I started receiving blocked resources messages in Search Console. I removed the widget in the meantime, but I want to install chat widgets again. Do I need to worry about blocked resources? Probably you can ignore that in this specific case. The blocked resources are particularly important for us if this is something on your website that you want to show up in Search. So if you want to rank for the live chat on your website, then we probably won't be able to find that. However, if you just want to rank for the normal content on the rest of your website, then having that chat widget blocked as a blocked resource is perfectly fine. It's not something you need to worry about.

Then, my site is bilingual. Do I need to confirm if the agency submitted the sitemap for both languages? Probably not. So probably we can just recognize that automatically and crawl and index those pages. The important part is that the different language versions have different URLs, so different addresses, so that we can access them separately.
So we can look at the German version, the French version, the English version, and we don't get automatically redirected to a different version instead. But if you've set that up normally, then a sitemap file is great to help us learn about that. But it's not a requirement. It's not something where your site will see any advantage if it's linked normally internally as well.

Do I have time for one more question? Sure. OK, it's about this site which mainly offers a navigation search box. It lists a series of providers, where users can select the country and a name and use the search box. Of course, that is not Google-friendly, so I suggested adding an alternative way to find those pages via static links. And we went with something like listing continents, clicking on a continent lists countries, clicking on a country lists cities, clicking on a city lists the providers inside that city. And all of these are static links. So it's pretty much like sitemaps - organized sitemaps - moved into HTML on the site itself. And I think that's a good way for Google to find them. Now my question is, when it comes to the providers in a city, the list can grow quite long. At what point do you think we should paginate it? So we don't have thousands of links on one page, but on the other side, we don't want to have thousands of pages. I would do that mostly based on usability. So not so much to worry about the SEO side, but really about the user. From usability, it doesn't really matter, because 99% of users will use the big search box, so they'll not even use those links. Those links are specifically for SEO, for Google, for search engines. I don't know. I don't think we'd have any hard limit where we'd say this is too much, this is not enough. But one thing I'd probably try to do there is keep things below a reasonable amount, maybe a couple thousand links, and keep it like that. But I'd read a recommendation that it was something like less than 500 - so I could go a bit higher? OK, I mean, it's tricky, because essentially all of these links are only there for Google anyway. So I would try to find a way to give some hierarchical structure to that, so that maybe you can provide some kind of a category page and interlink the individual pages themselves. So if you go to one article, then you find a list of related articles, and you can follow the net of pages across your website like that. Well, you see, it's kind of a yellow pages thing linking to them, so not articles themselves. So we go with continent, country, state, city, but when you come to a city, you may have over 1,000 in one city. Yeah, I don't know what the best advice would be. My worry would be more with regards to making sure you're actually providing value on all of these pages, that these are not just automatically generated pages based on a directory listing that you bought. That would be my bigger worry, more than whether or not we actually crawl through all of these.

OK, so related to this, on those pages, if initially there is very thin content - I mean, we have the address, we have things like that - and we offer the possibility for people to add more content themselves on their listings. But initially, we created them; they don't have that content yet. People may add it or may not add it. Would it be better if, for example, until someone adds some content, we use noindex on that page? That might be a reasonable approach. I mean, that might be an idea where you say, well, I want to make sure that the content that we do have indexed is high quality content.
And you do that by automatically or selectively adding noindex on those pages. That's an option, yeah. But if we don't, what is the risk of a penalty for thin content - many pages with not too much content? Probably not a manual action in that sense. But it might be that our algorithms look at your website overall and say, well, there are lots of pages here, but not really anything fantastic. OK, thank you.

All right, let me see. I think there are two questions that were submitted and a handful more in the chat. And it looks like I still have a bit of time in the room, so I'll just try to power through these.

Someone bought a domain and found it wasn't indexed in Google. The reason why it wasn't indexed is because of a DMCA complaint. What can we do? So this is something where, from what I understand, within the DMCA process you can file a counterclaim and have that reviewed. So you can do that if you've bought this domain now, and do that, I believe, through the Help Center. There's kind of this wizard flow that you can go through for something like that.

Our SEO team in the UK has noticed that since the September update to Search, the disavow tool works faster when recovering new clients with issues. In Webmaster Tools, when looking at sites that link to a website, it would be good to see a list of sites removed as a quick reference. Is this possible in the way Google handles the process, to output this in the table? So no, not particularly - in the sense that in Search Console, in the links to your site, we also include links that have a nofollow. So links that we've seen, even if they don't pass any value at all, which would be similar to a link that would be processed with the disavow tool. So I'm not sure, with regards to the first part of the question, whether the disavow tool actually works faster or not. The disavow tool relies on us recrawling and reindexing those pages. And there's kind of a natural cap on that, because we can't, like, melt all the servers on the internet by crawling them too hard. So on the one hand, I'd say there's probably nothing specific happening to make the disavow tool faster. And on the other hand, showing those links differently in Search Console would generally not be something that we would do. What I'd recommend doing there is maybe downloading those links and setting up a script in Google Spreadsheets to match those with your disavow file, so that you can filter them on your side.

All right, a handful also here in the chat. We have many reviews about our company on third-party services, like Trustpilot. But according to Google policy, we can't use those reviews in an aggregate form to display star ratings in the search results. Is there any way to take advantage of our company reviews with structured data markup without violating the Google policy? So we just did a Hangout with someone from the structured data team, I think on Tuesday, and I believe they talked about this as well there. So I'd double-check the video recording for that. From my point of view, the tricky part here is that if these are reviews for your company, then they should be placed on a page of your website specific to your company. So they shouldn't be repeated across all of the pages on your website, because it would be kind of misleading to say, well, this specific page is rated four or five stars because the company is rated like this - because actually you're rating the company. So that's something to kind of watch out for there.
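On the link-matching idea mentioned a little earlier, the same filtering can also be done locally with a short script instead of a spreadsheet. A rough sketch, assuming an exported links file with one URL per line and a standard disavow.txt using "domain:" lines and plain URLs; the file names are placeholders, not anything from the Hangout.

# Sketch: mark which exported links are already covered by a disavow file.
# File names and the one-URL-per-line export format are assumptions.
from urllib.parse import urlparse

def load_disavow(path):
    domains, urls = set(), set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if line.lower().startswith("domain:"):
                domains.add(line.split(":", 1)[1].strip().lower())
            else:
                urls.add(line)
    return domains, urls

def is_disavowed(link, domains, urls):
    host = (urlparse(link).hostname or "").lower()
    # domain: entries cover the domain itself and its subdomains
    covered = any(host == d or host.endswith("." + d) for d in domains)
    return covered or link in urls

if __name__ == "__main__":
    domains, urls = load_disavow("disavow.txt")
    with open("links_export.txt", encoding="utf-8") as f:
        for link in (l.strip() for l in f if l.strip()):
            status = "disavowed" if is_disavowed(link, domains, urls) else "not disavowed"
            print(f"{status}\t{link}")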
I posted the link to the one you mentioned. Thanks. But again, I'd really double-check the Hangout with the guy from the structured data team, because he knows what to watch out for, and I might be simplifying it a little bit too much. But if you're using a widget for third-party reviews for individual products, then placing those on those product pages is perfectly fine.

We have an issue in Search where the rich card is pulling from our site - I think you mentioned this. OK, and here are the details. So I'll copy the details out and double-check to see what might be happening there on our side. And if there's something specific that you'd need to do on your side, then I'll try to get back to you. I can't guarantee that I can get back to you on things like this, because sometimes this is just the way Search works, and there's nothing really actionable that I can give back to you there. But I'll double-check with the team.

All right, I think we have them all covered. Wow, OK, fantastic. So I guess with that, let's take a break here. Thank you all for joining. Thank you for all of the questions that were submitted and all of the comments along the way. Great Hangout again, and maybe I'll see you in one of the future ones. Or otherwise, I'll wish you great holidays in the meantime. The next one is on Wednesday. On Tuesday. Yeah. OK, thank you all for hosting us. Thanks. Thanks for coming. I don't know anything about Mihai, because I haven't seen him in months now. And he was here all the time. Yeah, maybe he's just busy. Sometimes that's a good sign. OK, bye. Thank you a lot again. Bye, everyone. Have a great weekend.