All right. Welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland, and part of what we do are these webmaster Hangouts, where publishers and webmasters can join in and ask any search or webmaster-related questions that they might have. A bunch of questions were submitted already, but if you're kind of new to these Hangouts and want to get the first questions in, feel free to jump on in and ask now.

OK. Well, I was going to allow the new guy to ask, since he seemed to want to, but since nobody else does, I'll just be real quick. I just wanted to check and see. Last week, we had the discussion about the sites outranking our site with content that they stole from us. I sent a couple of examples on Google+, and just wanted to find out if you had gotten a chance to take a quick look at that.

I don't have anything specific that I can get back to you on, but I did pass that on to the team to check out and see if there's anything they need to do there.

I appreciate it. Thank you.

All right. Otherwise, I'll jump in with the questions that were submitted already, and if anything comes up in between, feel free to jump on in. If you're watching this now, feel free to join the Hangout in person as well if you'd like. Sometimes there are a few technical problems with joining, so you might just need to refresh that Google+ link and double check which one's listed there.

All right. If you have an affiliate website, and along with other affiliates you all show the store address details of the company branches which you're affiliated with, does Google get confused with regards to ranking you for things in that particular location, given that a number of companies' websites show the same address? Or can Google tell the difference and rank the sites on their merits individually?

I'm not completely sure what kind of affiliate website you'd have there where you'd list the original source of the product. So if you have some examples around that, feel free to drop them in the thread and we can take a look. Or perhaps this is something that's worth starting a thread in the help forum about, to get feedback from other people as well. So I'm not really quite sure what the setup there would be.

Does Panda run once a month? And if you improve a section on your website, how long before the Panda score for that section changes and you start to see positive or negative effects from it?

So these algorithms tend to run regularly. We don't have a fixed timeframe where we say it runs every Monday or every first Tuesday of the month; rather, this is something that's continuously updated as we re-crawl and re-index the web. So there is no fixed time when it just runs, or when you would see a step change in the way that we evaluate the quality of your website overall. And these are things that for a large part also depend on your website in general. When we re-crawl your website, we have to take a look at all of the individual pages and collect all of the signals across the website, and that's something that doesn't happen from one day to the next. That can take quite a bit of time.

How long does it take for schema markup to show up, and how can I tell if it's marked up incorrectly even though the testing tool doesn't tell you it's wrong?
So this kind of goes into the previous question as well, in the sense that crawling and indexing take a bit of time. Some pages get crawled and re-indexed within days; others take weeks; some pages even take months. So if you make significant changes across your website overall, that's something that takes a different amount of time depending on the page. And in general, this happens on a step-by-step basis, in the sense that when we see the markup on one page, we'll process that and reuse that. We won't wait for the markup to be visible on all of the other pages of your website, so you don't have to wait for that last page to be re-processed. This is something that usually happens fairly gradually, and if you look at the graphs in Search Console, you'll see that there's often a steep climb in the beginning, and then it flattens out and catches up a little bit more slowly. That's our normal crawling and indexing.

With regards to how you can tell if it's marked up correctly or not: for structured data, we have essentially three things that you need to take care of. On the one hand, it has to be technically correct, so it has to pass the testing tool, which it sounds like you've done. On the other hand, it has to be correct from a policy point of view, so you have to use the right markup in the right places. For example, if you have a car that you're selling, then you wouldn't mark that up with recipe markup. These are the kinds of things where the testing tool might say it's okay, but if you look at it manually, or try to look at it logically, it doesn't make sense; that's the wrong markup in the wrong place. And the third one, which is the most tricky, I guess, is that the site itself has to be seen as something reasonably high quality, so that we can trust the markup that you have on the site. So essentially, those are the three things that we look for.

One thing you can do to test for that last part, with regards to whether or not your site is seen as high quality, is to do a site: query to look at the pages that were indexed. If the site: query shows rich snippets in the search results but normal searches don't show them, then usually that's a sign that, from a quality point of view, we don't quite trust your site perfectly yet.
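To make the "right markup in the right place" point a bit more concrete, here's a minimal JSON-LD sketch for a product listing; the page, names, and prices are hypothetical, but this is the general shape of markup that would be both syntactically valid and the appropriate type for a product page:

```html
<!-- Hypothetical product page markup. Recipe markup here would still
     pass a syntax check, but it would be the wrong type for the content,
     which is the policy problem described above. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Tractor 9000",
  "description": "A compact tractor, listed for sale.",
  "offers": {
    "@type": "Offer",
    "price": "19999.00",
    "priceCurrency": "USD"
  }
}
</script>
```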
John, can I ask a question about Search Console?

Sure.

So there was the data issue on August 14th and 15th. Was that at all related to the delay in Search Console being backed up like six, seven days at one point, or was it unrelated to that?

I think that's unrelated. I'm not quite sure which one you're referring to, but I don't think we called out that delay. And I think one of the other questions was, do you plan on keeping this delay forever? That's not the case. That was really something that was just broken on our side, and it took a bit of time to get all of the pipelines back into working order again and catch up with the data. So usually what happens is the data gets stuck at some point, and then it gradually backfills again, and you'll see the normal latency that is always there.

Will that ever be less than two days, even with the new beta?

Not with the new beta. It's something we've been looking into, to see what we can do to make that a little bit faster. But it's always a bit tricky, because there are so many things that have to be done before the data is actually visible in Search Console. So, I don't know. It's possible that we can get some sources of data a little bit faster, but across the board we probably at least have a one-day delay in there, to make sure that we can process all of the data on time.

OK. Does Google take into account how a page or website performs in different browsers, for Panda or rankings, et cetera?

As far as I know, we don't do anything specific in that regard. It's not that we say, oh, we'll look at this page in Firefox and Internet Explorer and Chrome and look at them individually. Essentially, our algorithms look at the page the way that it's rendered for Googlebot, and that's what we use as a basis. The one thing that's separate from this is, of course, the mobile side, in that we do look at the desktop view and the mobile view separately. But we don't look into different browser types to see what different browsers would see.

On our branch finder, the internal links to the relevant store information are behind dropdowns, depending on what region they exist in. Should we also have a branch finder page which shows all branches visible, as opposed to hidden behind dropdowns, to get maximum SEO value?

You don't need to do that. If those links are on those pages and they're just behind a tab that's loaded by default but not visible by default, that's perfectly fine. We can use that for crawling just the same.
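As an illustration of that answer, the pattern below (a hypothetical sketch, not the asker's actual markup) keeps the store links in the HTML that's served and only hides them visually, so crawlers can still discover them:

```html
<!-- Hypothetical branch finder: the "south" panel is hidden by default,
     but its links are in the served HTML, so they can be crawled. Links
     that are only fetched via JavaScript after a click would be a
     different situation. -->
<div class="region-tabs">
  <button data-tab="north">North region</button>
  <button data-tab="south">South region</button>
  <div id="north">
    <a href="/branches/north/store-1">Store 1, North Street</a>
  </div>
  <div id="south" style="display: none">
    <a href="/branches/south/store-2">Store 2, South Avenue</a>
  </div>
</div>
```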
With regards to main navigation, are some types better than others? Or does it really not matter, as long as it's usable?

As far as I've seen, it doesn't matter. Looking back at the number of sites I've looked at, from the help forums to larger sites that have come to us, I don't think I've ever run across an issue where the normal site navigation has caused any kind of SEO problem, outside of the general issue of having a terrible URL structure, where we can't crawl you without finding five bazillion URLs. But whether you have a mega-menu-style navigation or dropdowns, those kinds of things, that's really more up to you and more about usability than anything SEO-specific.

I read that content silos are a good way of getting content to rank. What do you think about that technique?

I'm not particularly sure what you mean by content silos. I assume this is something like creating niche sites, or something generally like that, where you create small websites that are focused on one specific topic. From my point of view, that's usually something we recommend against, just because it's a lot easier for you to maintain things on one website where things are interconnected. And it's a lot easier for you to build up that almost brand awareness, I guess, with regards to your one website, compared to having all of these other individual websites that are connected to your main website but not really on your main website. So for the most part, I'd recommend focusing on a smaller number of websites and tying everything together in a reasonable way, so that when users find your content, they can easily move along that path to actually converting into whatever you want them to do. If you want them to buy something or sign up for something, make that as easy as possible, so that when they get to your content, they know what to find and what to do.

If you wanted a page to rank for a particular phrase or keywords, can you give us some useful tips to get that page to rank?

That's a very broad question. I don't really have one simple trick to make your website rank number one. There are lots of things that you can watch out for, so I don't really have anything insightful I can add there without going off on a monologue for hours.

I know that content behind tabs is deemed less important, but does that also apply to links behind tabs?

We talked about that briefly before. Links behind tabs are perfectly fine.

We've been implementing schema markup on our e-commerce site for product categories. I think this question goes into a general theme of: which structured data markup should we use for our specific type of product or service? What I would do in a case like this, if you're unsure which markup to use, is go to the Webmaster Help Forum and get feedback from other people. In general, when we look at a website that has structured data on it, our algorithms are not going to be nitpicky and say, oh, you say this is a thing, but actually this is a service, so you have the markup wrong. It's more a matter of using the completely wrong type of markup for something. For example, like I mentioned, using recipe markup when you're trying to sell tractors; there's no way those two could be misunderstood, they're very different and have very different types of required fields. So that's the type of thing we're watching out for. For the more subtle differences, where it's not really a thing but more of a service you're offering, on the one hand I would get feedback from other people to make sure that you're in this discussion-worthy zone, and on the other hand, I wouldn't worry too much about it. Even when we look at that manually, we're not going to be nitpicking and say, oh, you called it this, but it's actually a subtype of that. So that's kind of what I'd aim for there.

Why does Google rank keywords in singular and plural forms differently?

That's a great question that comes up every now and then. One general thing from our side is that we don't have algorithms that try to linguistically analyze the queries that get sent in; rather, we try to look at those words just as words and figure out where they fit in with all of the words that we know. So our algorithms are not going to say, oh, "keywords" is related to "keyword", except that it has an S, so it has to be a page that has multiple keywords on it. It's more a matter of our algorithms saying, oh, I've seen "keyword" and "keywords" used kind of interchangeably in the past; therefore, maybe they're something like synonyms, and maybe I can treat them the same. And that's where you might see some things folded together, where we say the plural is essentially the same as the singular and we should treat them in the same or a similar way. And sometimes you'll see things split apart, where we haven't really recognized the connection between the plural and the singular form.
So that's something where there's nobody manually behind our algorithms going through all of the queries that are submitted and saying, oh, this word matches this one, and this one matches that one. Rather, it's something that our algorithms try to learn by themselves over time.

I'd like to know if there's any negative impact on site performance when using tracking pixels.

So for site performance, that's something you can test using the various speed testing tools out there. You can use Chrome; Chrome has some really fancy speed tools built in, like the Lighthouse tests. There's webpagetest.org, which does fancy waterfall diagrams. So lots of tools are out there that help you double check how your site performance is affected by tracking pixels. Sometimes, depending on the way they're implemented, they can slow things down. Sometimes they're implemented in a way where they're lazy-loaded at the end, and they don't really cause any issues with the page actually loading, or the visible content loading. So that's something you can double check on your side. With regards to SEO, from our side that's pretty much irrelevant. Sometimes these do slow the pages down a little bit, but when it comes to speed, we generally differentiate between pages that are reasonably okay with loading and pages that are really, really slow. And I don't think I've ever seen a case where one tracking pixel moves a site from being in this reasonable range to being completely unreasonable. Usually it's a matter of, I don't know, a couple hundred milliseconds at most. Users, of course, might notice and react if you're blocking the visibility of your content, but from the search side, we're not going to worry too much about that.

Our schema markup shows no errors, but we don't see it in search results.

We talked about this before; those three things are what I'd watch out for.

We're moving to HTTPS for one of our major sites, which sits on www and has an m. subdomain. We've bought the SSL certificates for these versions, and we're wondering whether we need to buy a certificate for the root domain, so example.com, for example. Is that necessary? Will that cause issues? Is it better to have it just in case?

So this is something you can probably double check in your server logs, to see where people are going at the moment on the HTTP version of your site. Whoops, okay. If nobody's going to the root of your site at the moment, then chances are they're not going to go to the root of your site when you've moved to HTTPS. Personally, I suspect it probably makes sense to just have that covered. And depending on how you have things set up, you can probably get the certificate for free, and setting it up when you're already setting up the other hosts is probably pretty trivial. So from my point of view, I would just get that covered completely, so you don't have to worry about the details there.

"User-agent: *" and then a command in robots.txt: is that also applicable to AdsBot-Google, or do we have to mention that separately?

I don't know for sure. I believe it needs to be called out separately, but you can double check our documentation; that should be listed there.
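For what it's worth, the documentation does describe it that way: the AdsBot crawlers ignore rules in the wildcard group and only follow groups that name them explicitly. A hypothetical robots.txt illustrating that (the path is made up):

```
# Hypothetical robots.txt. The wildcard group below does not apply to
# AdsBot-Google; to restrict it, it has to be named in its own group.
User-agent: *
Disallow: /private/

User-agent: AdsBot-Google
Disallow: /private/
```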
Suddenly, Webmaster Tools shows traffic for HTTPS being redirected to HTTP, even though my site is configured to do the reverse. What might be happening there?

I don't know what might be happening there, so I'd probably post in the Webmaster Help Forum about that, and mention your site so that someone can take a look. Maybe there's something that you're missing there.

Does Google Assistant use data from mobile or from the desktop version of Google, or is it a new mix of both?

So with Google Assistant, I believe you mean the featured snippets, when we call those out with voice queries. They're taken from the normal search results. At the moment, the normal search results are based on the desktop version of the pages. Over time, we'll move to the mobile-first index, and we'll use the mobile version.

Let's see. Do you know more about the future of the Google AMP project?

So it's just the AMP project; it's not Google-specific.

We added AMP to two pages of our site. The product pages are not doing any better. Can you tell us the future?

I can't tell you the future, but... so that's an easy question to answer. With regards to AMP, the thing to keep in mind is that it doesn't affect your rankings. It's not that your site is going to rank number one just by using AMP, or just by using HTTPS. It does make things a lot faster, but it doesn't significantly change your rankings in that sense. So if you add AMP to one of your websites and it was ranking on page 10, then chances are it's still going to rank on page 10 in the search results; it's not going to jump up to the first page. That's one thing to keep in mind with regards to expectations. Obviously, if users go to your AMP page and it's significantly faster, and they can browse your site and read the content that you're providing in a way that's a lot easier for them to consume, then chances are they'll respond to that and, I don't know, do more on your website. So that's the indirect effect there. And AMP does have a roadmap in general, obviously.

Also, anything new with the mobile-first indexing timelines? Any progress, any more clarity you guys have out there?

I don't have any big updates there at the moment. The team is currently still doing various types of evaluation to double check the classifiers that we have, to make sure we can recognize when a site is ready for mobile-first indexing. And that's something that's taking a bit of time. But in general, things are looking pretty good, and not as crazy as I first suspected. So hopefully it'll be a smooth transition for most sites.

A bit of an open-ended question: what if the homepage of a site suddenly drops out of all rankings apart from brand searches? Is it likely some change in the algorithm has driven this?

I don't know. There can be lots of things that affect the rankings of a site. I think you're digging into the right type of details already: you're looking at the homepage, you're separating out the branded and non-branded queries, and all of that gives you a little bit more insight into what might be happening. So instead of just taking the aggregate count of the search impressions or search clicks that you're getting, drilling down into specific details makes a lot of sense, and makes it a little bit easier to figure out what might be happening. If you see this drop in rankings across the board, especially for non-branded queries, then that's often a sign that our algorithms are just not as happy with your site as they used to be.
So that's something where you might want to take a step back and think about what you could do to significantly take your site up to the next level from a quality point of view.

Hey John, somewhat related to that: on our site, our core content doesn't really change over time, because of what it's about. So what we have done over the past few years is add a blog section, to try to bring some current events and fresh content into the site. It's gotten to the point where we have way more blog articles than we do core content, and I was wondering: is it possible that if the algorithm doesn't particularly like our blog articles as much, it could affect our ranking and quality score on the core content?

Theoretically, that's possible. I mean, we look at your website overall, and if there's this big chunk of content here, or this big chunk, importance-wise, of your content there that looks really iffy, then that reflects on the overall picture of your website. But I don't know in your case if it's really the situation that your blog is really terrible; I guess you would know that best.

I wouldn't necessarily say it's terrible. I mean, I would say that some of the articles probably don't meet the same quality standard as our core content. In that case, would you recommend going back through the articles that we posted, and if there are ones that we don't necessarily think are great articles, just taking them away and deleting them?

I think that's always an option. Yeah, that's something I've seen sites do across the board, not specifically for blogs but for content in general, where they regularly go through all of their content and see, well, this content doesn't get any clicks, or everyone who goes there runs off screaming. Then maybe that's something where you can collect some metrics and say, well, for everything that's below this threshold, we'll make a decision whether to significantly improve it or just get rid of it. And that's something we do all the time as well. In the help center in particular, I've seen our tech writers go through the content and say, well, five people last month looked at this page; what do we want to do about it? Maybe it's not worth keeping this page anymore. It's kind of like, well, you have it documented, but maybe it's not that important after all; maybe we need to do something about it. So that's the kind of thing where, across the board, I recommend taking this critical view of your site and saying, well, this is still relevant content, and this is not-so-relevant content.

So that's interesting, because I think Lyle is basically saying that he has a lot more content on his blog than the core content on the website. And I think that's the case for most sites; even Google probably has a lot more blog posts than FAQ pages, or whatever, about how Google Search works. And I guess Lyle's question was more about: does having a thousand blog posts and only 10 core content pages have some type of weird impact on how Google understands the core content versus the blog stuff? I think Google's probably good at understanding what's a blog post, even if it's on the same domain, versus what the core content is. But I don't know if there's something in the algorithms that identifies this as blog posts and this as core content.

It's really tricky, because you can't just look at the raw numbers.
You can't say a thousand blog posts versus a hundred core content pages, because we look at the website overall, and we might recognize that the core content is, like, at least a hundred times more important than the blog posts. So just looking at the number of pages doesn't really skew our view of the importance of the content. That's the tricky part there: you might have a ton of blog posts on there, but if we can still recognize that your core content is actually something different, then that's not going to cause any problems. The blog is going to bring you some visitors for the content you're writing about there, but the bulk of your visitors are probably still going to go to your core content, and we should be able to figure that out.

Could it possibly cause any issues or confusion if a lot of the blog articles link to the core content? Could that possibly...? Okay. Okay, thank you very much.

Right. Let's see what else is lined up here. I'm facing problems getting new pages indexed. Let's see: pages are linked, pages are fetched. What else could I check? Could it be related to the crawl budget?

This sounds like something you'd probably want to post in the Webmaster Help Forum, with a specific example URL, so that people can double check: on the one hand, from a technical point of view, whether everything is really set up properly, and on the other hand, whether there might be some quality issues or other issues that are worth looking into. One thing that we sometimes see is that sites have removed themselves from search completely. So if you're not getting any pages indexed at all, after all of these things that you're trying out there, then I would double check the removals tool in Search Console, just to make sure that there's no pending removal request there. Sometimes we've seen people remove, say, the HTTP version of their website in Search Console, in the hope that they can get everything refreshed. That's not what the removal tool is for, and what will happen there is that we will remove your whole website, just like you submitted to us, and we won't index any new pages until that removal request has expired or been removed. So that might be one thing to look at, but I'd really double check this with other folks in the Webmaster Help Forums, because sometimes there are some easy things that you might just be overlooking.

I'm running a server-side experiment using Google Optimize. I have a problem deciding which variation to show when a user has cookie support disabled. My intuition tells me I should always show the original variation, but would serving variations randomly cause some issues?

So in general, if you're doing A/B testing, we recommend showing Googlebot the version that the majority of your users are seeing. If you're really doing 50/50 testing, then obviously it's more up to you which one you want to show. What I wouldn't recommend doing is randomly switching the version, because that makes it a lot harder for us to actually index your pages properly. Every time we look at the page, we'd see a different version of the content, and that really makes it hard for us to understand: are things significantly changing on this page all the time, or are these basically just two versions that are swapping back and forth? So I'd try to stick with one version if that's at all possible.
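As a sketch of that "stick with one version" advice (an illustrative pattern with assumed names, not Google Optimize's actual API), a server-side experiment can bucket users deterministically from a stable cookie ID, and fall back to the original variation whenever no such ID exists, such as when cookies are disabled or the request comes from a crawler:

```python
import hashlib
from typing import Optional

def choose_variation(visitor_id: Optional[str]) -> str:
    """Pick a variation for a server-side A/B test.

    visitor_id is assumed to come from a first-party cookie. Visitors
    without cookie support have no stable ID, so they consistently get
    the original page instead of a version that changes on every request.
    """
    if visitor_id is None:  # cookies disabled, or a bot without cookies
        return "original"
    # Deterministic bucketing: the same visitor always sees the same side.
    bucket = int(hashlib.sha256(visitor_id.encode("utf-8")).hexdigest(), 16) % 100
    return "variation_b" if bucket < 50 else "original"
```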
How can I check which pictures are actually being indexed on a website? I have 150 to 200 pictures on my website, but only 100 are being indexed, and I don't know which ones.

Good question. I don't have any quick way to double check that. One thing you could do is a site: query in image search. I believe this is a bit tricky, because with images you have the landing page URL and you have the image URL itself, and I'm not 100% sure which of these URLs a site: query would react to. So that's a tricky aspect there, in general and in practice as well, with regards to image search. In particular, if you have, for example, a photography gallery that you're providing on your website, and you're linking to the images without any context at all, so the title or the text on the page is just "DSC something", "photographed by Canon", and "these are my settings", then it's really hard for us to tell what an image is about. Whereas if someone takes that image and embeds it in a blog post, or any other kind of web page with additional content around the image, then it's a lot easier for us to say, well, this image belongs on this landing page, and there's lots of information about this image here; therefore, we can rank this set of pages, the landing page with the image URL inside, for these types of queries. So that might be one thing to watch out for, especially if you do have that kind of really basic website for those images. That's where I'd recommend thinking about what you can do to provide some extra content around those images.

A company that's reselling templatized real estate listings has included a link to us as part of the template, and now we're getting tens of thousands of links. What can we do there?

So in general, we do recognize this situation fairly well, and we try to ignore those kinds of links. If you can get that updated so that at least it's a nofollow link, I think that would be the ideal situation. If you can't get that updated, then using the disavow file might be an option, if you really don't want to be associated with those links. But in practice, I assume this is not going to be a big issue for most kinds of sites. So if you really don't want to be associated with this template, I'd try to push a little bit more with the company that's providing it, so that you can actually get this fixed at the source and you don't have to worry about it anymore. But probably we're already handling that normally.

We understand web pages are manually reviewed with page quality ratings. Can we trigger a new manual review request? And there are lots of details there, like: our site should be ranking for this particular query.

So the reviews that we do from a manual point of view are more about the way our different algorithms are working. It's not the case that we go and review web pages manually and then use that information for ranking. It's more that our engineers come up with different variations of our algorithms, or different ideas for new algorithms, and then they run those variations past a bunch of testers, with a bunch of URLs and a bunch of search results, to see which of these algorithm variations is working the best and providing the most relevant information. That's where the manual testing comes in. It's definitely not the case that we manually review web pages and use that for ranking like this.
So there's no way to have a new manual review, because we wouldn't do that in the first place. That kind of makes sense, because the internet is gigantic, and we don't have enough people to actually look at every web page out there; that's, I think, physically impossible. With regards to the other items mentioned there, like being a fast website, being first to move to HTTPS, having HTTP/2, spending a lot of money on ads, and having a lot of structured data on the pages: that really doesn't roll into ranking the way you might hope. Some of these are ranking factors; for example, HTTPS is a really small ranking factor. But in general, these are things we think sites should primarily be doing for users. It's not that we would take this and say, okay, you're using structured data, therefore you will rank higher. From our point of view, we use structured data to try to highlight information in the search results; we don't use it for ranking. We don't think a site is more relevant just because it has structured data on it, for example. So in practice, what you probably need to do there is take a step back and think about what you could do to significantly improve the quality of your website overall, so that when our algorithms come across your website, they not only see that you have all of these things checked off from a technical point of view, but also that, from a quality point of view, you're really the website we should be showing at number one, significantly above all of the others that are out there for the queries that you're trying to rank for. And obviously that's not trivial, and there's no simple meta tag or magic button that you can click to make that happen.

Our site's index count is dropping from 24 million to 11 million when I do a site: search. What could that be?

So in general, the site: query is not something I would use for diagnostics at all. If you're looking at the number of URLs you have indexed, I would always recommend using the sitemaps count, which is based on the URLs that you actually care about. For example, if you have a calendar script on your website, we might run into the year 30,000 or something, trying out all of those pages, and maybe we index all of these empty calendar pages at one point. But if there's no content there, we might as well drop them from our index, because they're never going to be shown to users. So those are situations where the index count for a site: query, or even the index status graph in Search Console, might go up really high, while the number of pages with unique content on your site is much smaller. If that count goes really high and then comes back down to a reasonable amount, that's okay; there's nothing broken on your site. And that's something you can catch fairly easily with sitemaps: if you submit sitemaps for the URLs you really care about, then you can see how many of those URLs are actually indexed.
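For illustration, such a sitemap is just an XML file listing the canonical URLs you care about (the URLs below are made up), and Search Console's sitemap report then counts how many of exactly these are indexed, instead of mixing in auto-generated pages:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sitemap: list only the canonical URLs with real
     content, not things like auto-generated calendar pages. -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
  </url>
  <url>
    <loc>https://www.example.com/products/blue-widget</loc>
  </url>
</urlset>
```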
I have a question related to the download time which is mentioned in Search Console. I replaced my website with a single-page app, and now the average download time is over 2,000 milliseconds. Will that cause problems?

Probably that's too slow for a normal website, though probably not in the sense that you'd see any kind of ranking issue from an SEO point of view. But maybe I should take a step back. The download time that we show in Search Console, in the crawl stats there, is based on the download of the HTML page itself. So not the rendered version, not the version that people see when they open it in a browser, but the HTML files of the individual URLs that we download from your website. So if you're talking about a single HTML page, for example, that takes two seconds to download, then that's before the browser can even start to render that page and display it to the user, which probably means it's really slow to actually display it to users, especially if you have a lot of embedded content that's averaging similarly slow. So that's something I would personally try to improve significantly, and try to get back down to that couple-of-hundred-milliseconds mark. From an SEO point of view, again, this doesn't really make a big difference. From a crawling point of view, this can make a difference, in the sense that if individual URLs that we try to download take so much time, then we're probably not going to download a lot of URLs from your website, because we're afraid of overloading your server. So if you have a lot of content and the individual pieces take a long time to download, then perhaps we won't be able to get all of that content indexed. From my point of view, I would try to fix this, but I wouldn't see it as a critical SEO issue that you need to resolve in order to improve your rankings.

How can I prevent Google from framing my site like this? There's no way to get out of that frame as a visitor, and the title is misleading.

So I took a quick look at this; I think it's a Google Newsstand URL. One thing you can do in general with regards to frames is to use X-Frame-Options. That's an HTTP header that you can set for your pages, and from a technical point of view it prevents browsers from allowing your content to be framed. And when we see that, with regards to Google indexing, we also respect it. So if another page is just a frame of your content, and your content has this X-Frame-Options header set in a way that doesn't allow framing, then we won't associate those for indexing. With regards to Google Newsstand in general, I don't know what the details are about how content gets in there, or how that can be blocked. It might be that that goes through Google News, and if that's the case, then you might want to get in touch with the Google News folks. They have a contact linked in the Help Center for Google News publishers, where you can get in touch with them. I believe you can even do a live chat with someone from the Google News team there.
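The X-Frame-Options header itself is typically set in the web server config. A minimal nginx sketch (the add_header directive is real; the hostname and the choice of SAMEORIGIN versus DENY are illustrative):

```nginx
# Hypothetical nginx config: tell browsers (and, per the answer above,
# Google's indexing) that this content may not be framed by other sites.
server {
    listen 443 ssl;
    server_name www.example.com;

    # SAMEORIGIN allows framing by your own pages only; DENY blocks all.
    add_header X-Frame-Options "SAMEORIGIN" always;
}
```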
What's the difference between reading Google Analytics and Blogger statistics?

I don't know. I don't know exactly what Blogger shows there, so I really don't know what the difference between Blogger and Google Analytics would be. My understanding is that Blogger has a somewhat simplified view, and Analytics of course has all of these gazillions of options that you can follow for more details. Probably the general trends, with regards to traffic coming and going, are similar. I would assume that you just get more information in Google Analytics, because it has all of these detailed options.

Let's see. Lots of people trying to join, and it's full. It can happen.

What's the plan with regards to Search Console data? Will it always be seven days behind?

No, it should catch up, hopefully. At least that's the plan as I know it.

OK, a bunch of small questions still left, with people trying to jump on in. I realize it's sometimes tricky to get into these Hangouts; you have to be quick, and update the Google+ page quickly to get in on time. I do these regularly, so if you weren't able to get in this time, try again a little bit faster next time. Usually in the first, I don't know, five, ten minutes there's still a chance to get in.

All right. Maybe I can switch to questions from you all. What else is on your mind?

Hi, John. Can you hear me?

Yes.

My question is: is there an issue with migrating a section of a site to another site? I have two sites that have similar content. One of them is updated every day, site A, let's say, and the other one hasn't been updated in years. I would like to redirect the blog part of site B to site A. So for example, siteB.com/best-coffee-flavors redirected to siteA.com/blog/top-10-coffee-flavors. Would Google see that as some kind of black hat SEO, or penalize it, or something weird, you know?

No, I think that's perfectly fine. Joining two sites to simplify them into fewer URLs is perfectly fine; lots of sites do that. One thing I would keep in mind when you're doing things that aren't pure migrations from one domain to a new domain: when you're joining sites together, or when you're splitting a site apart, you can't easily determine what the final effect will be with regards to traffic from search. So when you're combining two URLs, you can't assume that the traffic of the final URL will be the traffic from this one plus the traffic from that one. It may be that there are some differences there, because we have to re-evaluate the site overall. Sometimes that means you have a much stronger site, and we can trust it a lot more and significantly improve it. Sometimes it means you're taking one good piece of content and combining it with something mediocre, and the resulting content is, yeah, kind of okay, but not perfect. So those are things to keep in mind: you can't easily determine what the final result will be when you do anything more complicated than moving one site from one domain to another.

And just to clarify: so for example, if site B is still live, like, I'm going to leave five pages live and just transfer the blog to site A, that's still fine, correct? Google won't see that as something weird?

No. I mean, in general, this is something that just happens naturally. It's not that we would have any reason to call that manipulation. You're moving parts of your sites together to make it easier for maintenance, to make it easier for users to follow along in one place. That's perfectly fine; that's totally up to you.

Okay, and I'm sorry, one more thing. So for example, if I have two articles on site B, let's say "best foods for migraines" and another article about how to naturally mitigate migraines, and I have on site A an article about everything about migraines, can I 301 those two to the all-about-migraines article and be okay, even though they're separate sites?

I mean, if this is the new version of the content that you're providing, that's up to you; you can do that.

Awesome, well, thank you very much, John, I appreciate it.
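The server-side piece of a merge like the one just discussed is a set of permanent (301) redirects from the old URLs to their new homes. A hypothetical nginx sketch, with made-up domains and paths:

```nginx
# Hypothetical config for site B, merging its blog into site A
# with permanent (301) redirects.
server {
    server_name site-b.example;

    # Two old articles consolidated into one combined page, as discussed.
    location = /best-foods-for-migraines {
        return 301 https://site-a.example/blog/all-about-migraines;
    }
    location = /mitigate-migraines-naturally {
        return 301 https://site-a.example/blog/all-about-migraines;
    }

    # The rest of the old blog section, mapped path-for-path.
    rewrite ^/blog/(.*)$ https://site-a.example/blog/$1 permanent;
}
```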
All right, more questions. What else is on your mind?

Hi John, you mentioned the Search Console indexing report earlier, and I've always struggled with what that actually includes. I just recently had a look, and one thing I noticed was that AMP pages didn't seem to be included in it. Could you clarify what actually gets included in that indexing report?

The index status report; so, let me see. Essentially, those are the pages, or the URLs, that we feel are canonical within your website. Usually those are the HTML pages that we know from your website. Sometimes they're things like PDFs and other pages from your website that we also include in web search and show in the search results. Let's see... yeah, it's mostly that. I believe the AMP pages wouldn't be included, because they have the rel=canonical set to your normal pages. So if you have, for example, a separate AMP subdomain, probably you'll see that the index status is not very high there. As with any page that has a rel=canonical on it, it's not guaranteed that we follow the rel=canonical. So if you have AMP on a separate subdomain, you'll probably see some amount of pages being indexed, but probably not the full amount; we'd probably focus on the normal web pages there. Similarly, if you have an m. domain, then we'd probably index some of those pages, just because we've run across them and for some reason we don't follow the rel=canonical there, or we've just seen them for the first time and haven't had a chance to follow the rel=canonical. So some amount of those pages will be indexed, but the bulk of the pages would be indexed under the normal desktop web version of your site.

That kind of makes sense. I just got confused because I looked and I had 500 pages in the report saying they ranked, and then only 200 pages indexed. How can that be? But that's if you take out AMP pages; I wasn't sure about PDFs. I also saw the report show anchor links, which I would assume don't go in the index. So anything with a hash?

What's happening there is that in the Search Analytics report, we're showing the URLs that we show in the search results, and the URLs we show in the search results aren't necessarily the ones that we have as canonical. So often, if you do an info: query for a URL, you'll see the canonical is actually this other URL. We might have shown it in search because, on mobile, we know the mobile alternate version and we show that in search, but the canonical that we have is actually the desktop page. So it's kind of weird to hear about this 500-versus-200 mismatch, but theoretically that could happen.

That makes sense. Is that the same with hreflang, then? If you've got one domain with multiple languages, you might get five URLs showing up in the results but only one in the index?

With hreflang, we'd generally have all of the pages indexed, because that's how hreflang works: it works between the canonical versions of the pages. So we'd have those versions indexed separately.

Cool. Cheers.

Right. So I have to run. I set up the next Hangout, so feel free to join that one on Friday, or at least add your questions there. All right. So thanks a lot, and hopefully I'll see some of you all again next time.

Hi everyone. Thank you, John. Bye.