 OK, welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do is talk with webmasters and publishers like the ones here in the Hangout, and the ones that submitted questions in the forums. As always, I'd like to give those of you who are here live in the Hangout a chance to ask the first question. If there's something on your mind, feel free to jump on in. Hi, John. I have a question. Sure. So my name is Russell. My question is about our website, AugustaPreciousMetals.com. They mainly work with individual retirement accounts. We run different types of campaigns: AdWords, Bing, TV, radio, and so on. Now, there is a competitor ranking in the second position for our brand term. The company name is Augusta Precious Metals. And they're trying to harm our reputation; it's like stealing our traffic. And they're doing their best to harm our reputation. So we don't understand. We're trying to follow the instructions and everything, but it's a large site. And we've been trying for like a year. And they're also trying; they're constantly updating their web page and doing their best. So I'm not sure. And here's another thing: we have had a bunch of people investigating. They have nothing to lose, because they have a different domain. It's not the brand. We are the brand. So what is happening is that these guys, they have nothing to lose. In our opinion, they're trying some black hat or other kind of shady stuff. We don't know what exactly they're doing. But we just need some suggestion on what we should do to push them down. I think from your question that you posted, that they're ranking for your name. Right, exactly, yes. Yeah. 
So I guess from a general point of view, there is nothing to prevent other people from ranking for terms that you're interested in. So that's kind of like the starting point. It's not like there is a meta tag you can put on your page and say, nobody else is allowed to rank for this name because it's our name and it's kind of restricted to us. So that's kind of the starting point, that other sites can rank for your name. If they have content on them, if we think from the signals that it's potentially relevant, then we might show other sites for those names. Things you can do on your side, of course, is make sure that your site is really strong. Make sure that your site is really relevant for those words. You can consider maybe branching out and thinking about maybe social media profiles and seeing if there's something that you could do to kind of put your content on other places as well, so that you can have different pages that can at least rank for your company's name. So that's kind of the direction I'd look into there. No, we already have a social media presence: LinkedIn, Twitter, YouTube, all the important places. But here's another question. I want to inform you that there's another domain which was owned by the same owner, but that domain got a penalty. And the previous guy who used to work with us on search engine optimization, without actually fixing anything, just straight away submitted a reconsideration request saying, okay, we made a mistake, and blah, blah, blah, so just lift that penalty, help us out. But it was rejected. And then we decided, okay, it's getting really difficult, and we said, okay, we have to take that site down. And that's what we did. And do you think that it's somehow related? I mean, is there any relation here, given that the same owner owns both domains? I didn't see that as being a problem. So that's something that can sometimes happen. 
You make a mistake, you put something up on a website that you afterwards realize you probably shouldn't have done, and you take that down. That's life. It's not that our algorithms would hold a grudge and say, oh, this guy did something wrong once, therefore everything they do will be under extra scrutiny. So we essentially treat your site as it is now. It's not that we're saying, based on things that happened five or 10 years ago, we will treat your site a little differently now. So I guess what I would do in a situation like this where you're kind of stuck, I'd go to the Webmaster Help forums and get feedback from other peers as well. Because sometimes there are things you can do to kind of boost your Instagram profile or your Twitter profile and kind of help work on your reputation like that, to kind of spread the word and make sure that all of these different places are actually relevant for your name. And if they're relevant for your name and if they have information that users like, then that's something we generally show. But it's not the case that you can say, focus on a competitor's website, I want to push this guy down in the search results. You have to have something to kind of replace it with. The old name is Augusta Gold IRA. And, sorry, we see that, okay, we're still getting visits for this name, or term, Augusta Gold. And the new term is Augusta Precious Metals. And we have some previous social media profiles which are tied to that name: Twitter, Pinterest. And like I said, there's a guy who used to work with us, and we have no control over those accounts. So we don't know what to do. And it's not just that one specific company; there are actually a couple of others doing the same thing. And it's our brand, Augusta Precious Metals. How come they're in the second position? They shouldn't be so close. 
It's also, I think, good for you, because you can focus on your site and move it forward while they're spending all their energy focusing on competing rather than on building up something that works for the long term. No, it looks like someone is financing them, a competitor or a bunch of competitors maybe, financing them to do something like this, to harm our reputation. It's really, I mean, it's really difficult. Yeah, I mean, this is something where I'd really focus more on your site and make sure that you're pushing yours up, rather than kind of concentrating on what they're doing and constantly trying to follow up and kind of battle like that. Because everything that you spend focusing on your site is something that's valuable for the long term. That's something that you'll continue to kind of keep building on. Whereas every bit of energy you waste on the competitor is something that's lost. It's lost time. It's a lost opportunity that you could have spent on your site. Right, yeah, we truly believe that, because it's not our goal to fight over our brand term. I mean, our goal should be to put that time into our business. But the main reason is the amount of money we're spending on different kinds of marketing. And they're just stealing our traffic and giving a bad impression to our clients and customers. That's the reason. But anyway, thanks for your suggestions. Sure. All right, let me run through some of the questions that were submitted. And we'll almost certainly have time for more questions from you all as we move along. Our e-commerce website is suffering from Panda. And one of the things we've identified as needing improvement is that a couple of our categories are too general and not specific enough. My question is, does Panda take site architecture into account when computing a Panda score? Or would fixing those categories make no difference at all? 
So from our point of view, when we look at Panda, we see that as something that's more of a general kind of quality evaluation of the website. And it takes into account everything around the site. So that's something where if we find issues across the site where we think this essentially affects the quality of the website overall, then that's something that might be taken into account there. So if you're saying that your category pages are really bad and that's something that you really need to improve, then that's something I'd work on. I'd work to improve. Is there any way that you could improve the format of these Hangouts? Since you're doing them on YouTube, it sucks to try to find a question. Yeah, that's good feedback. So one of the things we've been looking into doing is more shorter topical Hangouts, not Hangouts, like topical videos as well, that are focused on one specific question. So I hope that makes it a little bit easier. Otherwise, I don't really have any magic trick to finding individual questions within the videos. I realize that's sometimes tricky. Do you have someone transcribe them, John? Sometimes. Sometimes we have people that transcribe them. Yeah, the German Hangouts are really transcribed a lot, but you kind of need to understand German for that. So your advice is all learn German? Yes, yes, that's good. Wasn't there somebody who used to publish an index of the questions and the time points? Wasn't there before? Or is that a long time ago? It used to be that Hangouts would do that automatically. So it would take the submitted questions and link to them. But since the Q&A app disappeared, and I don't know, it's like all downhill or just all changing. So yeah. No, we'll find ways to improve it. Let's see. Question about content and ranking for a specific keyword. 
I understand that you use 200-odd signals to rank, but if I wanted a piece of content to rank for a particular keyword phrase, does that phrase have to exist on that piece of content, or would Google be clever enough to work it out based on other factors? We can definitely work that out. So in extreme cases, you might have your page blocked by robots.txt and we see nothing of the content, and we can sometimes show that in the search results. However, if you really want to target a phrase or some specific thing that people are searching for, it does make it a lot easier for us to actually have that phrase on the site or on the page. So that's something where, if it's really important to you that people find your website for a specific term, then by all means do put that on the page. It's not the case that you need to put all variations of those terms on your pages, but at least let us know this page is about blue shoes, rather than just having a big photo of a blue shoe and no text at all about shoes. So you kind of need to have a certain amount of directness in the text. And that helps users too: when they click on your page and they actually see what they were looking for, that kind of helps them to understand, oh, this is really what I wanted. A little bit of background noise here. I have a question about image optimization. Google is getting better and better at object recognition. Does this mean that we no longer have to optimize our images or pictures with descriptive file names, alt tags, title tags, et cetera? I could see that happening in the future at some point. At the moment, this all really helps us a lot to understand what these images are about. So having a clear file name and an alt attribute on the image, a title attribute, all of that helps. A caption below the image, kind of the text around the image as well, all of that helps us to understand what this image is about. So that's something I wouldn't skip, at least not for the moment. 
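To make the image optimization points above concrete, here is a rough sketch of what a well-described image might look like in HTML. The file name, URL, and wording are invented for illustration:

```html
<!-- Descriptive file name, alt text, title, and caption all give
     search engines textual clues about what the image shows. -->
<figure>
  <img src="/images/blue-suede-shoes.jpg"
       alt="Pair of handmade blue suede shoes"
       title="Blue suede shoes">
  <figcaption>Our handmade blue suede shoes, available in every size.</figcaption>
</figure>
<!-- Surrounding text reinforces what the image is about. -->
<p>These blue suede shoes are stitched by hand in our workshop.</p>
```

No single one of these signals is required on its own; as John notes, they add up.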
I could imagine, I don't know, maybe five, 10 years in the future that things are working so well with images that we just need to see an image once and understand exactly what it needs to rank for, but I don't see that happening in the near future. Can we talk about structured data for a minute? I'm not understanding how we're supposed to be marking up content on our site if we use an approved third-party aggregator making... Let's see. I think this goes into how to add reviews to different variants of products. So if you're selling a T-shirt and you have one page for the T-shirt in large, one page for the T-shirt in small, how do you put reviews on there? Do you separate them out? Do you put them together? I think this kind of runs into the general topic of how you should group your pages. And for the most part, I would really recommend grouping pages based on the primary topic of the page. So if you're selling T-shirts and the difference between the different sizes is essentially just a variant of the same product, then I'd recommend just using one page for that, one URL, so that that URL can be relevant for that kind of T-shirt type query, for that T-shirt type of information. And the size is something that people can just choose. It's just a different variant of the same thing. And that makes it a lot easier for reviews as well, because then you have the reviews focusing on the product itself, and the variant is essentially just a different version of the same thing. It's not a completely different product. On the other hand, if the products are so different that you say, like, a small T-shirt and a large T-shirt for me means different materials, different cut, different print, different colors, then maybe it does make sense to separate those out, and then it does make sense to have separate reviews for those different products, essentially. But this isn't something where there is a one-size-fits-all solution. 
Oh, that's a terrible comparison, I guess, for T-shirts. But you have to kind of figure out yourself what works on your site, how people want to find products, where they see the difference between the different products and the different variants of the same product. I noticed some websites have their blog subpages set up with a canonical tag pointing to the blog's main page as the preferred version. Is that the correct use of the canonical tag, since these subpages aren't true copies of the blog's main page? That's actually incorrectly set up. So that's something where we've seen sites in the past, over the years, set up a canonical tag like this, where all pages within the site point to the home page. And from our point of view, that's an incorrect usage, because these pages are not equivalent. You can't swap them out and have the same thing. And what usually happens there is that we see the canonical tag and we ignore it, because we understand that probably this was a mistake on the webmaster's side. So just because someone else is doing that doesn't mean it's a good thing to do. Sometimes, even if it's a big company that does something, that doesn't necessarily mean it's a good thing to do, because big companies get things wrong all the time as well. John, what do you think then of noindexing subpages? You can do that if you really don't want them indexed. Fine. I mean, that's kind of your decision. What kind of happens there, though, is that if anybody were to link to those subpages directly, then those links would kind of get lost. We wouldn't know where to point them to. Obviously, links on those pages, even if they're noindex, still kind of spread the PageRank around. But if there's nothing to index, then any links going there kind of get stuck. Does Google ever block a specific domain from crawling? Usually not. So I think this long question goes on into: we migrated to HTTPS, and every time we try to fetch a URL as Google, it comes back as an error. 
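For reference, the canonical setups discussed above, the incorrect subpage-to-homepage version and the correct self-referential one, might look roughly like this (URLs invented for illustration):

```html
<!-- Incorrect: a blog subpage declaring the blog home page as canonical.
     The pages are not equivalent, so Google will likely ignore this tag. -->
<link rel="canonical" href="https://example.com/blog/">

<!-- Correct: the subpage pointing at itself as the preferred version. -->
<link rel="canonical" href="https://example.com/blog/page/2/">
```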
So I kind of need to double check the domain to see what is actually happening there. But I'm not aware of us blocking any kind of crawling for a specific domain. The only time where I've kind of seen that happening is if you're hosting on an IP address that is somehow blacklisted in general. So not something where it's blacklisted for email spam or something like that, but if it's, what are they called? I think bogon IP addresses, or something crazy like that. So that might be a problem. But otherwise, it's not the case that we would block crawling of any specific URL. So if you're consistently seeing errors with regards to Fetch as Google, with regards to normal crawling of the site, then that sounds like something that's actually on your website itself, on the hosting, or on the network where you're kind of hosting this website. So usually what I do in a case like this is to see if there are other websites hosted on the same server and double check whether those websites work. You could double check with the mobile-friendly test, for example. That's a really quick and easy way to see what a Googlebot user agent from Googlebot IP addresses would be able to see. And by checking another website on the same server, you can kind of see: is it something that's specific to the server? Like, can Google not reach that server over the network connection somehow? Or is it something specific to your website itself? So a lot of times, we'll see things like bot protection plugins or DDoS protection setups on a server or on a website. And they might help protect against bots, but they might also protect against Googlebot, which is probably not what you're trying to block. So that's kind of the rough approach I take there to narrow down: is it the server? Is it the network? Is it my website? Is it something I have in my settings? Can I assign both an x-default and an en hreflang tag to the same page? Yes, you can. So you can apply all of those levels. 
English, you could even say this is English for the UK, and it's the default page. That's perfectly fine. You can do that. So x-default doesn't have to be a separate page. You can also specify multiple hreflang tags going to the same page. So you could say this is the page for UK English and Australian English. This one is for American English. And the UK version is the x-default that I want to use. So all of those combinations are possible. Multiple alphabets, same language. My site has content in the Serbian language, in both the Latin and Cyrillic scripts. ISO 639-1 only defines one code for Serbian, without distinguishing the scripts. I guess that's the character encoding question. Is there a way to implement the hreflang tag? And if not, will the situation cause duplicate content issues? So first of all, this would not be a situation with duplicate content, because assuming your Cyrillic and your Latin text are on separate pages, we would see those as separate characters, separate words. It would definitely be separate. And even if it were the same, what would happen there is we would try to recognize it as a copy and just rank one of those versions in the search results, so there's no penalty for having a copy like that. So that's kind of the easy part: no, don't worry about the duplicate content. With regards to hreflang, I believe there's a way to specify in the hreflang that this is in this alphabet, in Latin or in Cyrillic, for the different country codes and language codes as well. But I'm not sure exactly how that is set up. So I would look at the, what is it, the ISO codes that we specify for hreflang; the Wikipedia page has some information on what variations you can do there as well. And in the worst case, if you can't specify the different character sets, then you can say both of these are in Serbian, and we will try to figure it out on our end. So it's not something that's 100% critical that you need to get exactly right. 
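As a sketch of the hreflang combinations described above: multiple annotations can point at the same URL, one of them can double as x-default, and for the Serbian case, BCP 47 script subtags like sr-Cyrl and sr-Latn are, to my understanding, the usual way to distinguish the two alphabets. Domains and paths are invented for illustration:

```html
<!-- Repeated in the <head> of every page in the group. -->
<!-- The UK page serves UK English and Australian English, and acts as x-default. -->
<link rel="alternate" hreflang="en-GB" href="https://example.co.uk/" />
<link rel="alternate" hreflang="en-AU" href="https://example.co.uk/" />
<link rel="alternate" hreflang="x-default" href="https://example.co.uk/" />
<link rel="alternate" hreflang="en-US" href="https://example.com/" />

<!-- Serbian in two scripts, using BCP 47 script subtags. -->
<link rel="alternate" hreflang="sr-Cyrl" href="https://example.rs/cir/" />
<link rel="alternate" hreflang="sr-Latn" href="https://example.rs/lat/" />
```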
Does it do any harm to a single page to add 30 podcasts in iframes? No. So from a search point of view, it doesn't matter at all. From a user's point of view, it might be that having 30 podcasts in iframes makes the page extremely slow to load. So that's something I'd look into. But just having podcasts on a web page is perfectly fine. Nothing against that. One thing maybe to mention with regards to podcasts is that obviously we don't know what's within your podcast. So if you're just linking to a podcast file, maybe in a player, then we don't know what you're talking about in the podcast. So having some kind of textual information on that page helps us as well, so that we can actually rank that page appropriately. So if it's a podcast on a specific topic, if it's a podcast where you're discussing something specific, then give that information in textual form so that we can understand what this page is about and show that in the search results. If you have a transcription of the podcast, that's even better, because then all of that information is available for us to actually index. If I have my brand in three different pages, but there is no correlation in the content and structure between them, should I add the hreflang tag, maybe just to the home page? So I'm guessing three different languages, not different pages. Yes, the hreflang tag can make sense there. The hreflang tag should be between equivalent versions, though. So for the home page, I'm guessing that definitely makes sense. If the other pages are not equivalent, then I wouldn't use the hreflang tag. So you can think about it like this: if someone is searching in a different language, we would swap out those URLs in the search results. So if you have a German page about shoes and an English page about airplanes, then if someone is searching for shoes and we show the URL for airplanes, that doesn't really work. So that's kind of what you should be aiming for there. 
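A rough sketch of the podcast advice above: give the page real text alongside the embedded audio, so there is something to index. File names and wording are invented:

```html
<article>
  <h1>Episode 12: Optimizing images for search</h1>
  <!-- The audio file itself is opaque to search engines. -->
  <audio controls src="/podcasts/episode-12.mp3"></audio>
  <!-- A summary, or better a full transcript, provides indexable text. -->
  <p>In this episode we discuss descriptive file names, alt attributes,
     and captions, and why they still matter for image search.</p>
  <details>
    <summary>Full transcript</summary>
    <p>(transcript text goes here)</p>
  </details>
</article>
```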
They should be equivalent pages that we can swap out on demand. Does a sitelinks search box make sense for a big marketplace? Do you have examples of success cases? It can definitely make sense, but this is something you kind of have to think about for your own site. The thing to keep in mind with the sitelinks search box is that we don't show it just because you have the markup. Rather, with the markup, we're able to show the version that you prefer. So if we don't show any sitelinks search box for your site, for queries for your site, then adding that markup isn't going to change anything. It's not going to encourage us to start showing it. But if we're already showing a sitelinks search box and we're linking to a site colon query for your site, and you say, well, I prefer people search within my own site, because I have better search results internally, or maybe I can place some more marketing material around the search results, maybe an ad or something, then that's something you can do. So once we're showing the sitelinks search box, that markup helps us to pick the version that you prefer to have shown. Can you please look into why this info query is showing a different website? It looks like they're all copying the content and potentially redirecting. What's up with that? So I have to double check what's actually happening there. It's hard to say. It looks like you linked to a forum thread, so I'll copy that down and see if I can check that out as well and potentially add something there. What I've sometimes seen is that sites get hacked and have a weird cloaking behavior, in that they're redirecting to other sites, and then we pick the wrong URL to show. So that's something that might be happening here. But I need to double check. When using the hreflang tag to indicate language versions of a page, I understand that the rel canonical of the page must be the same as one of the hreflang versions. 
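On the sitelinks search box markup discussed a moment ago: the structured data for indicating your preferred internal search is schema.org WebSite/SearchAction markup, roughly like this (the domain and URL pattern are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "url": "https://example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://example.com/search?q={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>
```

As John says, this markup only picks the destination for the box; it does not cause the search box to be shown in the first place.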
So if I have three versions for English UK, US, and Canada, and the original content is the Canadian version, I imagine setting the canonical to the Canadian URL. No, that's actually wrong. So that's something I often see people talk about in these Hangouts. The difficulty here is that with the canonical tag, you're saying that you prefer to have just that version indexed. So what will happen there is we will focus on the Canadian version and say, oh, this is great. We'll see the canonical tag on the US and the UK pages and say, oh, it says to focus on the Canadian version, so I'll just ignore the UK and the US versions completely and only index the Canadian version. So what would happen then is we would only have the Canadian version, and in the US and in the UK, we would also show the Canadian version, because that's the only one that we have. So setting a canonical across language or regional versions would be wrong. Setting the canonical to its own URL is perfectly fine. So having the Canadian version with a canonical to the Canadian version, the US version with a canonical to the US version, the UK version with a canonical to the UK version, that's perfectly fine. But doing that across language and country versions would be wrong. John. Yes. Here's a question I didn't come here to ask, but it's just funny that you were talking about that. I recently had some discussion around this. If a site is publishing multiple EN locales, so they're all duplicate English locales, but they're in different countries for various reasons, they may have one for the UK, Ireland, the EU, US, et cetera. If they're using hreflang and they canonicalize to a single version of those, from my experience, it works as I expected. It will canonicalize to one version. And if I go to Google.co.uk and search, it will show me the UK version based on the hreflang. Now, is that just because I'm jinxed and I'm getting that? 
Because a lot of people seem to argue that actually, no, because you're telling them the canonical is something else, they're ignoring the page. But I'm saying that there's also an hreflang signal in there. I know the original setup for this caused a lot of confusion because of the canonical and hreflang, but is it wrong, if you have multiple EN versions, to canonicalize them to a single version and use hreflang? So what can happen is that we pick up the hreflang, we swap out the URLs, but the title and the snippet and the rankings will be based on the canonical version. So if you have a UK and a US version, and one of them has dollar prices, the other has pound prices, and you pick one canonical, then we will show those prices in the search results. So that's something that's often kind of frustrating then. Because someone in the US is searching, we show the US URL, but we show the pound prices in the snippet. Yeah. But it is possible to use hreflang and to canonicalize multiple pages to a single version. Sometimes that works, yeah. OK. That's a bit of an iffy answer. We try to recognize situations where people are using it in a way that they shouldn't be and try to fix that. But I wouldn't say that's the correct way to do it. So you would advise against doing something like that? Yeah. OK. John, can I just ask a question about hreflang? Sure. Because I run a site that's for the DACH region. So when you're applying hreflang to a site, can you do it on the home page to make it site-wide? Or do you have to do every single page? It's per page. Per page. OK. Perfect. Thank you. Let's see, there's a question in the chat here. Do you know Google Site Search is going to shut down and transfer to Custom Search? What's the impact? I can't find any details about it. So I believe this is not the site colon query in normal search results, but that special search setup that you can have where it searches within your website. 
And I think, so I don't work on this team, I'm just seeing this kind of from the side, I think the idea is just to combine it with the general Custom Search Engine setup, because they kind of do similar things. So Site Search is based on just one website. The Custom Search setup can be about different websites, or one website as well. So I suspect the implementation is slightly different. The search results might be slightly different. I don't know how these are actually set up. But from a purely practical point of view, it's kind of like taking two things and combining them into one. OK, so John, do you know if there's any other forum talking about this Site Search, or a help forum or help center, so I can go to them? I don't know. I know we have, or we used to have, a help forum for the Custom Search Engine setup. So that's one place I would check. I don't know if that actually still exists. Let me just. OK. So that's something I would kind of search for. Yeah, I think there's a Custom Search help forum. I see, I will look into that. So that's one place I would post to kind of figure out what's up, what you need to change, what the details are. Thanks. Let me just copy that into the chat. Let's see. A trend I noticed over the years is that Google is giving more weight to internal subpages in the search results compared to the site's home page, especially when a user searches for something more specific. For the most part, Google still gets it right. But with more importance seemingly being placed on internal pages, I continue to see more examples of where Google gets it wrong. So I'm not aware of any planned change in our algorithms where we would say focus less on home pages, focus more on internal pages when it comes to rankings. I assume this is something that just happens over time. On the one hand, depending on the websites that are involved, on the other hand, depending on the different factors that kind of come together when it comes to ranking. 
So I'm not aware of anything specific where someone has a dial that says more home pages, more individual pages, and they're kind of tweaking that. So, that said, if you're seeing search results that are bad, whether it's with home pages or subpages or just generally bad, then I would definitely submit feedback. So on the bottom of the search results, there's a feedback link that's looked at by the teams here. If you're seeing broader issues where lots of search results are bad, or very generic search results are really weird, really broken in a way that they don't really make that much sense, and it's hard to kind of write that up in a feedback report, you can always send that to me directly as well. And I can pick that up with the team here to see if there's something that we can improve there. The kind of search results that we generally don't focus that much on when it comes to individual escalations is if you're saying, on page 5 of this query where I take 12 words in quotes, there's this page that is kind of bad, you should swap that out. So on that level, it's like, well, you're probably the only person that has searched for this specific setup and looked at page 5. So spending a lot of engineering time on tweaking the search results so that this page on that specific item there changes is probably not something that the team will do. But if you're saying, for this specific generic term, the top, I don't know, five search results, all of them are really bad and it's kind of a really bad user experience, then that's something where the team would say, oh, well, lots of people are looking at this. This affects not just this specific generic query, but lots of other queries. Therefore, we really need to spend some time to understand what exactly is going wrong here and what we need to do on our side to improve that. And sometimes they can make changes very quickly. 
Sometimes it's like some engineers take this up and say, oh, this is a problem we need to solve, but it's really, really hard. We're going to go away for a year and work on this and come back with some changes, once we've done a lot of tests, that should help improve this. But regardless, feedback is always important for us there. I wanted to ask. Sure, go for it. I wanted to ask a question, too. And that is, there was a PBN for a site which was being ranked in the top results. I reported the PBN twice or thrice, but I did not see any result or any action being taken towards the owner of that site. And that site had been heavily spammed by the webmaster. But there was no official action taken by Google. And even after reporting it twice or thrice, no action was taken. So what should one do in such a case? That's sometimes hard. On the one hand, the webspam team does take a look at those reports. But they also need to kind of prioritize to see what they're going to focus on. So one common scenario that I've seen is that a site might rank well despite doing a lot of really bad things. So it's something where we might be demoting a site, but it's still really relevant based on other factors. And in a case like that, it can happen that even if the webspam team has taken action, it's still showing up in the search results. And that, from my point of view, can be fine. So for example, a situation that we often see is that a site is doing lots of keyword stuffing. And we look at that and we say, well, they're doing keyword stuffing. Keyword stuffing is against our webmaster guidelines. Therefore, we should take action. But when we look in the back end, we see our algorithms are already ignoring all of this keyword stuffing. So this website is doing all of the spammy keyword stuffing, but it has no effect at all on their site. It's ranking based on the other things that it has. 
Basically, they're spending lots of time doing spammy things, but we're ranking it based on the good things that they're doing. It might be a lot of work.

One more thing I wanted to ask you, since you were talking about images. What effect does the file name in the URL of an image have on Google's understanding of what the image is about? Say the title is XYZ123, and the image is about flowers. What effect does that have on Google's understanding of the image?

So we do take a look at the URL for images specifically. And we try to use that when it comes to ranking. But if the URL is completely irrelevant for the query and nobody is searching for that term, then that doesn't really help us. It doesn't change anything. So if you have a picture of flowers and it's called 12345, then that's not something helpful for us to understand that this is about flowers. Whereas if the URL is called flowers, then we can understand, well, it says flowers in the caption, it says flowers in the URL, maybe this is really a picture of flowers. So it's not so much that this URL would disappear because it has an irrelevant file name, it's just that we aren't really sure how it all comes together there. So it's a good practice to use descriptive file names, specifically for images. Thanks.

All right. I got a message in Search Console that Google has identified an increased number of URLs that can't be accessed without permission. These pages present either a login page or a forbidden response code, HTTP 403. What does that mean? So when we flag this in Search Console, it means we've crawled a lot of pages on your site and we essentially got a login page, or we got a page saying you're not allowed to access this page. So what happens in that case is we can't index that content. And those URLs drop out of the search results, because we think maybe this is private information, maybe these are confidential pages that shouldn't be indexed at all.
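Descriptive file names can be produced mechanically when publishing images. A minimal sketch in Python; the `descriptive_filename` helper is purely illustrative, not anything Google provides:

```python
import re

def descriptive_filename(title: str, extension: str = "jpg") -> str:
    """Turn a human-readable image description into a URL-friendly file name."""
    slug = title.lower()
    # keep only letters and digits, hyphenate everything else
    slug = re.sub(r"[^a-z0-9]+", "-", slug).strip("-")
    return f"{slug}.{extension}"

# A name like "red-tulip-flowers.jpg" tells a crawler far more than "IMG_12345.jpg".
print(descriptive_filename("Red Tulip Flowers"))  # red-tulip-flowers.jpg
```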
So that's something where, if you check those URLs and you see, oh, these are all links to my admin panel, or these are all links to something I don't want to have indexed, then that's fine. They're not supposed to be indexed, and seeing errors like that is perfectly correct. On the other hand, if these are URLs that you do want to have indexed, then I would double-check to see what is happening there. One thing that sometimes happens is that a site has some kind of security plugin installed. And when Googlebot crawls, it runs into that security plugin, which displays something like a login page or a captcha, or just returns a you're-not-allowed-to-access-these-pages response. And that's something that would be problematic. So you can test with Fetch as Google, with the rendered version of Fetch as Google, to see what these URLs actually look like to Google.

The hard part is sometimes when this happens sporadically. So if you check and everything looks OK, and the errors go away after a couple of days and then come back again another day, then that's kind of a tricky situation, because sometimes they're shown, sometimes they're not shown. Then I'd double-check maybe with the hosting provider to see if there's something on their side that they're sometimes turning on or off, or that's sometimes being turned on or off automatically. So maybe Googlebot is crawling so much that the hosting provider is saying, oh, you're not allowed to access this anymore. And that's something that the hosting provider might be able to control for your website. In the worst case, if Googlebot is just crawling too much, then you can let us know in Search Console as well, in the crawl rate settings, to say, hey, you're crawling way too much, please crawl a lot less.

Next question. I manage a multilingual site in subdomains or subfolders, /en, /es, /it, and the mobile version on a specific m. domain. Will structured data with JSON-LD be the same across all domains?
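One way to spot-check this outside of Fetch as Google is to request the URLs yourself with a Googlebot-style User-Agent and look at the status codes; a security plugin that challenges crawlers will typically answer 401 or 403. A minimal Python sketch; the helper names are made up for illustration:

```python
import urllib.error
import urllib.request

GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

def check_url(url: str) -> int:
    """Fetch a URL with a Googlebot-like User-Agent and return the HTTP status."""
    req = urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # e.g. 403 from an overzealous security plugin

def is_blocked(status: int) -> bool:
    """401/403 mean the crawler was asked to log in or was denied outright."""
    return status in (401, 403)
```

Running `check_url` over a sample of affected URLs a few times a day can also surface the sporadic on/off behavior John mentions.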
Yes, I would recommend putting structured data on all of those pages, and putting the same JSON-LD, or linking to the same JSON-LD files, on there, so that when we switch over to mobile-first indexing, we can actually pick up that structured data directly. Also the same with the AMP links. The canonical links still point at the desktop pages like before, but having a link to the AMP page from the mobile page is a good idea.

Some more questions with regards to the mobile index. We currently hide a lot of text. I heard that's bad. What should I do there? So yes, on desktop, if text is hidden, then by default we will treat that as being less important. On mobile, we understand that it's a lot harder to create mobile pages with the limited space, so that's less of an issue there.

So then there's a question about crawling. If we push a 304 HTTP code, Not Modified, to Google, will Googlebot still crawl the page if it has that page in its index? And the answer to that is a definite maybe. So the 304 response code says Not Modified, but it only makes sense to return a 304 if the request has an If-Modified-Since header. So some requests we send to sites are just normal GET requests that say, hey, just send us that page. Some requests that we send are like, send us this page if it has changed recently. And if you see the If-Modified-Since header, then sending a 304 tells us, oh, no, this didn't change, you don't need to look at the content. So we will essentially just bounce off and say, OK, fine, we don't need to look at the content. You're telling us it didn't change. That's OK for us. But we don't always crawl with an If-Modified-Since header. So it's not the case that we would always bounce off when we see a 304.

Our site has an extensive set of outbound links, thousands, that have nofollow attached. These link out to reference sites and are useful information. If we remove those nofollow links in one go, will that be positive or negative?
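The conditional-request logic John describes can be sketched server-side: only answer 304 when the client actually sent If-Modified-Since and the page hasn't changed since then. A minimal Python sketch, assuming your framework hands you the raw header value (the `respond` helper is illustrative, not any real framework's API):

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime
from typing import Optional

def respond(last_modified: datetime, if_modified_since: Optional[str]) -> int:
    """Pick the status code for a conditional GET. A 304 only makes sense
    when the request actually carried an If-Modified-Since header."""
    if if_modified_since is None:
        return 200  # plain GET: always serve the full page
    since = parsedate_to_datetime(if_modified_since)
    if last_modified <= since:
        return 304  # Not Modified: the crawler can skip re-processing the body
    return 200  # changed since the crawler last saw it: serve it again

page_changed = datetime(2017, 1, 1, tzinfo=timezone.utc)
print(respond(page_changed, None))  # 200: no header, so never 304
print(respond(page_changed,
              format_datetime(datetime(2017, 6, 1, tzinfo=timezone.utc))))  # 304
```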
If these are natural outbound links that point at useful resources for users, I don't see any reason to keep the nofollow. And switching from one version to another, there's nothing that I'm aware of that would flag this as something problematic. So if that's something where you added a nofollow in the beginning because you were kind of worried and unsure, and now you're sure that this is actually really good content, that these are links that you do want to have on your site, then switching it off is perfectly fine.

I have a lot of video content that my site's visitors download. How can I leverage this volume of data to increase my rankings? I store the data on Google Drive, and it's completely separate from my website. Would a link to the Drive file from my website help the website's SEO? Or would I need to store the actual files on my web domain to have that effect?

So from our point of view, it's not that you would get a boost in SEO if we recognize videos there, but we could show the video snippets in the search results, which sometimes drives a lot of clicks to the page as well. So that's something that might be an option there. We could also show the page in video search results. So if someone is specifically searching in that video mode, where they clicked on Videos on top, and they search for something your site has content on, then we could show your pages there. So that's definitely an opportunity. With regards to where the video files are hosted, that's completely up to you, provided we can crawl them. So if you're hosting these on YouTube, that's fine. If you're hosting these on your own server, that's fine. If you're hosting them on a CDN or some other video server, that's fine. I'm pretty sure that if you host them on Google Drive and they're publicly accessible from Google Drive, then we would be able to use those as well.
I'm not 100% sure of how the Google Drive video embed actually works, but my understanding is that it's pretty much the same as a YouTube player, so that probably would just work. I'd double-check this maybe with Fetch and Render in Search Console, to see that we're able to pick up the video player UI on your pages, wherever you're hosting them.

Recently, I started working as an SEO in a company that works on an affiliate site and over 100 sites that we use as traffic generators to that site. When I started working on a backlink audit, there were 4 million backlinks to those sites, all of them follow links with the same anchor text. I removed all of these spammy links a month ago. Is it a bad thing that I removed them all in one day? How long will it take for Google to pick up those changes?

So first of all, it sounds like you have a lot of work, so that's going to be an interesting opportunity there. In general, if you've removed these links, then they will be gone. So if you put them in a disavow file, or if you have control over the sites and you clean that up, then that's something that is perfectly fine from our point of view. We'll take them out of the link graph over time, but this isn't something that happens from one day to the next. So if you actually remove the links, then in Search Console it'll still take a while for that to update. If you disavowed them or changed them to nofollow, then Search Console will continue to show them there. So that's something where, if you're putting a lot of links into a disavow file or changing them to nofollow, you just have to keep in mind that the Search Console data is based on all links, including those as well.

John, can I be rude and ask a question? What? Can I be very rude and jump in and ask a question? All right, go for it. Or two. The first question is quite easy. For a large brand and for a branded search term, why would Google show the login page of their app rather than their home page? Any ideas?
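For reference, the disavow file itself is plain text: `#` comment lines, `domain:` prefixes for whole sites, and bare URLs for individual pages. A small Python sketch that assembles one; the function name and the sample hosts are made up for illustration:

```python
def build_disavow_file(domains, urls):
    """Emit the plain-text format the disavow tool expects:
    '#' comments, 'domain:' prefixes for whole sites, bare URLs otherwise."""
    lines = ["# spammy links identified in the backlink audit"]
    lines += [f"domain:{d}" for d in sorted(domains)]
    lines += sorted(urls)
    return "\n".join(lines) + "\n"

text = build_disavow_file(
    {"spammy-network.example"},
    {"http://other.example/paid-link.html"},
)
print(text)
```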
I don't know, because we mess up. So is it like a mobile app? Sure. Ah, apologies there, my dog. Don't mind the dog. They are an app, a very, very big app, a big brand. And in one of their locales, branded searches are bringing up their login page as the first result, and then it's got their Twitter feed, and then it's got their home page. And it's a very odd result, which they're a bit baffled by. And I suggested maybe it's because users are more interested in the login page on the branded term, but they're not happy with it. OK, yeah, I mean, this is something where they could return a noindex as well. Then we would just pick up something else from the website. So that might be an option. I don't know. Yeah, it's just one of those.

Anyway, the second one just comes to Fetch and Render. Any ideas on best tactics for Fetch and Render for JavaScript or Angular-type stuff, where we can't debug it? We can't see what Google sees, because you're not showing us in the cache what you're rendering. You're just showing us what you're pulling down. Like, we're looking at outputting something just for Googlebot, just to see some debug information. But any thoughts on best practice for that?

Yeah, that's really hard. So the test pages that I've been making for myself to kind of figure this out essentially revolve around JavaScript print statements on the page, essentially. So not putting these in the production pages, but creating some test pages. I put these up on GitHub, just to have a simple place to focus on the JavaScript side and see what happens. And that's how I do that there. It is tricky, especially when it comes to things like structured data, the meta tags, the head section of a page, because that's not easily pulled out with JavaScript. And you don't see that when you render a page. You just see the screenshot.
So I know the team is working on expanding that, so that you can do a Fetch and Render and see the HTML version of the page as well, which I think makes a lot of things a bit more clear. But I don't know what the timeline is on that. OK, cool. Thanks. Sorry for jumping in. No problem.

Let me just run through the two questions that are left. And I think I have this room a bit longer, so we can run over a bit as well. I have a multi-location site. URL structures are per the location and services, but the business is London-based. Is it possible the business can rank in a local map of Manchester or not? What factors affect the local map ranking? What do we need to maintain? So I am not sure how the local side ranks everything. That's essentially run on the Maps side of Google. So I don't really have the local ranking factors that I can give you. As far as I know, the important part is really that you have local listings, and that you have a local listing that's linked to your website, ideally, so that we can show that combination. But I would double-check, post maybe in the local business forum, to get advice from people who actually know what they're talking about.

We've seen a big drop in the number of indexed pages on our mobile site, m.example.com. When doing a site: search, we're seeing fewer than 600 results. We used to see a lot more. Search Console says there are still 3,000 pages. What's up with that? So for the most part, if you have your separate mobile URLs set up properly, you shouldn't really need to worry about the indexed version of those pages, because we'd be indexing the desktop page primarily. So the m. pages would have a rel canonical set to the desktop version. And the pages from the m. subdomain that you would see indexed are more the temporary pages, where we've seen the page, we haven't processed the canonical, but we can show it to you if you do a site: query and search for it specifically.
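The separate-mobile-URL setup John refers to is a pair of link tags: the desktop page advertises its mobile alternate, and the mobile page canonicalizes back to the desktop version. A small Python sketch that emits both tags (the helper is just for illustration; the URLs are placeholders):

```python
def mobile_annotations(desktop_url: str, mobile_url: str):
    """Return the two link tags for a separate-mobile-URL setup."""
    # On the desktop page: point at the mobile alternate.
    desktop_tag = ('<link rel="alternate" '
                   'media="only screen and (max-width: 640px)" '
                   f'href="{mobile_url}">')
    # On the mobile page: canonicalize to the desktop version.
    mobile_tag = f'<link rel="canonical" href="{desktop_url}">'
    return desktop_tag, mobile_tag

d, m = mobile_annotations("https://www.example.com/page",
                          "https://m.example.com/page")
print(d)
print(m)
```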
So that's something where focusing on the indexed count for a mobile page doesn't really make sense. It doesn't really give you any information. I'd really focus more on the indexed desktop pages, and take a look at the mobile-friendly reports in Search Console, to get an aggregated view of how we can recognize the mobile versions of your pages.

HTTPS as a ranking factor for those sites with information only, is that going to be a problem? So HTTPS is relevant for any type of website. It doesn't matter if it's information only. It can be just as relevant there. So the thing with HTTPS is not that it's only meant to encrypt information like credit cards and passwords; it also ensures that the information that users see is actually what you're providing. So one thing that's really common is that free Wi-Fi providers, hotels, sometimes airports, will take a look at HTTP pages and add tracking information, or add ads, to those HTTP pages. And even if your page only has information on it, if it suddenly has crazy ads on it, or tracking information that you aren't providing, then that's something you probably care about. And with HTTPS, you can ensure that users at least see the content the way that you've supplied it from your site. So just because it's information only is not a reason, in my mind, to say I don't need HTTPS at all.

All right. I'd like to ask a question. Sure. Go for it. Yeah, I wrote an article for a keyword, meaning I wrote an article naturally, and I had a keyword in mind for the topic I was writing the article about. So the thing is that I wrote the article, published it, googled it, and I could tell that my article was one of the best you could ever find on the basis of that keyword. But still it lands on the eighth page, and I have not built any spammy backlinks; the ones which existed are disavowed. So why is this happening, and what can one do to get the natural rankings? Those are probably already the natural rankings.
So just because it's one article and it has those keywords doesn't necessarily mean it'll rank well automatically. So that's something where essentially you just need to keep working on it: keep making more good pages on the website, keep building kind of a following for your website, making it something that people want to go to on their own, that people link to on their own. All of these things kind of add up. And I'd say there's no magic way to just rank for a keyword, even if it's a good article. If we don't understand the website, if the rest of the website is something we're not really sure about, if nobody's linking to it, then our systems might say, well, it's not bad on its own, but it's also not the best. Or we don't know enough about it to understand that it's actually really important and really good. So this is something where you have to persevere and keep working at it, keep creating good content, and make sure that your website kind of grows in value over time.

And one more question, if you don't mind, please. I think around February 7th, instead of my traffic dropping, it increased. So was there any Google update during February 7th or that week? Probably. So most of the updates that we do, we don't announce. We make hundreds of changes every year, and on average that ends up being, I don't know, a couple of changes every day, more or less, that roll out. So that's something where we probably have made changes. I've seen lots of reports around those dates, so you're probably not alone. What sometimes happens, and can be a bit confusing, is that we roll out multiple changes on the same day. Usually this is more accidental, because they're ready at some point, and we just roll them out and turn them on. It's not the case that we would say, oh, we'll purposely put two or three changes together and confuse everyone. Our goal is primarily to improve the quality of the search results. It's not to confuse webmasters.
But these kinds of changes, they happen all the time. Some sites go up, some sites go down. That's essentially the way it happens on the web.

Hi, John. You covered structured data earlier on. How much emphasis is actually put on microformats? Because obviously, a lot of the information online is about schema markup. But how much are you using h-entry and microformats these days? Because Google Webmaster Tools shows it at that level, and trying to find information is quite difficult. Yeah, so there are a couple of ways that you can specify structured data. And from our point of view, if you're using one of the supported ways, then that's fine. We don't differentiate and say, oh, schema markup for this item is better than microformats markup for the same item. If we can pick up the markup, that's perfectly fine. One thing I'd avoid doing is using different kinds of markup for the same items. So if you have breadcrumbs on a page, don't specify them with JSON-LD and with schema and with this and that. Rather, pick one format and stick to that. That makes it a little bit easier for us, because we don't have this clashing markup, where one says this and the other says something different. OK, great. Thank you.

Hi, John. Can I ask a quick question? Sure. So I asked a question earlier on Google+, about the mobile subdomain where we have been losing lots of indexed pages. So you mentioned that we should canonicalize all our URLs on the mobile site to our desktop. Is that what you meant? Yes. All right, cool. Thank you. So we have a document on the developers site on how to set up separate mobile URLs. And it has information on the canonical from the mobile to the desktop, and on how to link to the mobile version with the alternate tag. That's what I would focus on there. But it is recommended by you? Yes, definitely. To canonicalize all mobile URLs to the desktop site? Exactly, yes. All right, just wanted to clear that up. Thank you. All right, any last questions?
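As an illustration of picking a single format for breadcrumbs, here is a minimal Python sketch that emits a schema.org BreadcrumbList as JSON-LD only, rather than duplicating the same data in several markup syntaxes (the helper function and the example URLs are hypothetical):

```python
import json

def breadcrumb_jsonld(crumbs):
    """Build a schema.org BreadcrumbList as a JSON-LD script tag."""
    data = {
        "@context": "https://schema.org",
        "@type": "BreadcrumbList",
        "itemListElement": [
            # positions are 1-based, in trail order
            {"@type": "ListItem", "position": i, "name": name, "item": url}
            for i, (name, url) in enumerate(crumbs, start=1)
        ],
    }
    return '<script type="application/ld+json">%s</script>' % json.dumps(data)

print(breadcrumb_jsonld([("Home", "https://example.com/"),
                         ("Flowers", "https://example.com/flowers/")]))
```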
Anything else still on your mind? OK, that's good too. In that case, I'm sure you're all going to have a fantastic weekend. I'll be setting up the next batch of Hangouts probably later today. So if there is anything that does come to mind, feel free to add it there. Or feel free to jump into the Webmaster Help forums and post there. There are always good people there that can help with a lot of questions too. So with that, I'd like to thank you all for joining. Thanks for submitting all the questions. Thanks for dropping by live. And wish you all a great weekend. Bye, everyone. Thank you.