OK, welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland, and part of what we do is these Office Hours Hangouts, where webmasters and publishers can come in and ask any kind of web search-related question that maybe we can help answer. It looks like some of you are here already. If any of you have questions to start off with, feel free to jump in now. Otherwise, I'll go through some of the questions that were submitted.

All right, then let me run through some of the stuff that was submitted already. We tend to put these out on Google Plus maybe a week or so ahead of time, to get a chance to collect questions, and plus-ones for questions, to figure out which ones we can answer. So I'll just start from the top.

Can page relevance affect site quality as well? So if one page is really relevant and useful, does that affect how Google looks at the quality of the website overall?

Well, one page is obviously part of a website, so there's some connection there. But when you're looking at individual pages on a bigger website, it's hard to say that this one individual page that is really good is affecting the hundreds of thousands of other pages on the website. We do try to look at things on a per-page basis as much as possible, but we also try to look at the overall picture, and the overall picture is obviously affected by how individual pages work as well.

How did the Fred update impact e-commerce websites? Why did so many e-commerce sites see a drop in ranking? What can we do to recover that loss?

So from our point of view, there was no Fred update. That's a name that was given externally to a bunch of updates that we've been doing over time. It's not the case that there's this one thing that is changing in search. We make changes in search all the time, and we're always working on finding ways to bring more relevant, higher quality content to users. That can affect a variety of websites, and a variety of areas where we show content in search. So it's not specific to anything e-commerce related. We wouldn't say, if this is an e-commerce website, then we will show it lower in search just because it's an e-commerce website. I don't think that kind of connection would make sense. We try to do this really on the basis of what we think is relevant and useful for users. So if your website is seeing strong fluctuations around any general search changes, then I would recommend taking a step back and thinking about what you can do to significantly increase the quality of the website overall. And that's not a simple thing, for the most part. It sometimes takes a bit of work to step back from the thing that you've been focusing on for so long, to get objective feedback from other people, and to think about what you could be doing to not just tweak things a little bit, but really take it to the next level.

Can you confirm that Google assesses a site for Panda based on the pages that are indexed, rather than the pages that exist on a website? So, for example, is putting a noindex effective for culling a site's cruft?

Yes, we do focus on the indexed content itself. We don't look at things that are not indexed, because if they're not in search, then they're not really relevant for us anyway.
So if you have a website and you realize you have low quality content on it somewhere, then primarily, of course, we'd recommend increasing the quality of the content. If you really can't do that, if there's just so much content that you can't adjust it all yourself, if it's user generated content, all of those things, then there might be reasons where you'd say, OK, I'll use a noindex for the moment to make sure that this doesn't affect the bigger picture of my website as it's visible in search, and over time I'll find a way to handle this in a more elegant way. So that might be an option.

We've been tracking rankings on various search engines for specific keywords for a while now. We're on the first page of other search engines, but nowhere on Google. Why is this happening?

So I guess the simple answer to this question is that we use different algorithms. All search engines use different algorithms and look at different things, so it's expected that the rankings would be different across different search engines. Crawling and indexing are also different across search engines. So just because a website is very visible on one search engine doesn't mean that there's anything weird happening with another search engine that doesn't show it in the same place. I think it's good to have a variety of search engines, and sometimes you pick one that works best for you and stick with that. That's perfectly fine. It's not that there's one single factor where we say we do it like this, and Bing does it like this, and Yahoo does it like this, and maybe Yandex does it like this. Rather, on our side we have well over 200 algorithms affecting crawling, indexing, and ranking, and I'm certain the other search engines have just as many algorithms, maybe even more, that also look at crawling, indexing, and ranking. All of these could be looking at different things.

If I'm writing a 2,000-word text and a particular keyword's density is more than 4%, will you consider this keyword stuffing? It's a natural piece of content, written without stressing any particular keyword. How does Google differentiate between stuffing and naturally written content if the density of a particular keyword is the same in both?

So I don't know how you manage to do this, but when I write a piece of text, I never hit any specific word count or keyword density. My feeling is that perhaps this text that you're writing, supposedly without thinking about any particular keyword density or number of words, is actually quite focused on the number of words and a particular keyword density. Often that doesn't sound very natural to me. In general, I'd really recommend writing natural content, because that's what search engines have learned to understand. That's what search engines focus on, and that's what search engines try to use to understand the text and to figure out: is this something that's actually reasonable, or is this something that someone artificially wrote to try to match a particular density or number of words? So from my point of view, I'd step away from word counts and keyword density and really focus on actual, normal, human content.
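Picking up the noindex option from the Panda answer above: keeping a low-quality section out of the index is one robots meta tag per page, or the equivalent HTTP header. This is a minimal sketch; the header variant is an assumption about your server setup, not something from the Hangout:

    <!-- In the <head> of each low-quality page you want kept out of the index -->
    <meta name="robots" content="noindex">

    # Or as an HTTP response header (useful for non-HTML files):
    X-Robots-Tag: noindex

Googlebot has to be able to crawl the page to see the tag; a robots.txt block would hide it, a distinction John returns to later in this session.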
We have a competitor whose on-page and off-page work is weaker than our website's, but it's still ranking higher. The only thing that's superior is their age. After how many years does Google start taking a website into consideration for competitive keywords?

So I guess this is a tricky situation in any case, whether it's in search or just a normal business offline. If someone has been active in a particular business field for longer, they'll be better known, and there are lots of people who know that business and maybe go there directly. When it comes to search, we collect a lot of signals around a website, and if we've been collecting really good signals for a really long period of time, obviously that's valuable for that website, for that business. So it's not so much that they have an old domain name or an old business. It's really a matter of this website having consistently performed really well, so that Google has seen it as being really good. Obviously, that doesn't mean it can't change. If you have something that is significantly better than what they offer, then over time that should be, and generally is, reflected in search. There are lots of situations where one website comes out with something really fantastic, or a really neat business model, and suddenly businesses that have been around for decades, maybe even a hundred years, are struggling to keep up because of this new business, this new website, with really fantastic ideas. So it's not the case that you have to sit in the corner and wait it out for ten years until your website is shown in search. It's really a matter of there being a business out there that's been doing a good job for a really long time, and you have to prove that you, as a newcomer, are actually just as good or significantly better than that business.

If we're not keeping alt text for a particular image, is something wrong in terms of Google's algorithms? Does empty alt text negatively affect SEO? We don't want our images to rank in Google Image Search.

Well, I guess if you don't want your images to rank in Image Search, then that's less of an issue. If you're not providing any context for your images, you might even go so far as to block them from being indexed in Image Search, if you really want to. But in general, since there are still people who use screen readers to read websites and interact with them, both on desktop and laptop devices as well as on mobile phones, the alt text does help them quite a bit to understand what it is that's actually being displayed on the page. So from our point of view, I think it's good practice to have descriptive alt text for any image, even if you don't want those images to show up in Image Search.

Does the HTML-to-text ratio matter?

No, it doesn't matter at all. There is no ideal ratio. From our point of view, as long as the HTML page is less than something like, I don't know, 100 megabytes, I don't know what the limit is at the moment, we pick up the content that's visible on the page. If you have a lot of CSS in the page, or other markup, that's totally up to you. That can slow things down for your pages, of course, but that's more a matter between you and your users.
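For illustration, the descriptive alt text John recommends above is just the alt attribute on the image element; the file name and wording here are made-up examples:

    <!-- Descriptive alt text helps screen reader users, even if the
         image itself never needs to rank in Image Search -->
    <img src="/images/zurich-office-team.jpg"
         alt="The support team standing outside the Zurich office">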
How do I set up the canonical link for a CMS website, to avoid a manual penalty?

These are, I guess, very different topics. The canonical link is something you can set up on pages to specify which of your pages is the preferred one that you want to have indexed. That's something a lot of CMSs do automatically, so that might be something where you could install a plugin if you don't have it out of the box. And it has nothing to do with manual actions, nothing to do with web spam. From our point of view, the canonical is something that affects how we index pages, and manual actions are really based on the quality of the pages themselves.

On our category pages, we display the product titles below every product to give details to the customer. One product is displayed on multiple pages, increasing duplicate content on the listing pages. Is that a problem?

From our point of view, that's up to you. That's something you can do. What would happen in a case like this is that we would rank those pages individually, and if there is significant text overlapping between these pages, we might just show one of them in the search results for a particular user. So if they're looking for the product in general, we'll pick one of these and show it. If they're looking for a specific variation, we'll try to show that. So it's not something that you need to fix. But if you wanted to say, well, instead of these variations I really want one strong page on the general product, then that's something where you might consider combining them.

Why do famous bloggers not use meta descriptions on their pages?

I don't know, you'd have to ask them. I don't know of any famous bloggers who explicitly remove the description from their pages. Maybe it's just not something they focus on. Obviously, you can be successful online without using a meta description on your pages. Sometimes it helps us to better understand those pages and to show a better snippet in search. It doesn't affect ranking, and it doesn't mean that you can't become famous if you do use a meta description on your pages.

If I'm launching a new website and I want users to land on the new version, but they should be able to access the older version for a month or so, what precautions and steps do you suggest?

That's always a tricky situation. In general, what I'd recommend is first figuring out how you want to handle this on your side. The ideal situation when you relaunch a website is to reuse the old URLs, to avoid the redirects and to avoid having us reprocess the whole website to understand it again. Obviously, if you want to show both versions of the website to users at the same time and let them switch between versions, that's not going to be that easy. So what a lot of people do is a kind of A/B testing, where they launch the new version on a separate subdomain and put rel canonicals on those pages pointing to the old version, so that while they're testing things out, the old version remains indexed. And when they're ready to actually make the switch for good, they replace the old version with the new version. That's probably the easiest way to do it. What happens in a case like that is that we will still crawl the new version and see the rel canonical. So there might be a period of time where we actually pick up some of the new versions of the pages and index those as well, but over time we will fold those together into the canonical that's specified on your pages.
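As a concrete sketch of the rel canonical setup described here: each page on the test subdomain points at its equivalent on the live site, so the old version stays indexed during testing. The hostnames and path are hypothetical:

    <!-- On https://beta.example.com/products/widget, in the <head> -->
    <link rel="canonical" href="https://www.example.com/products/widget">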
I got a manual action on my website about a month ago, solved the issue, and applied for reconsideration, and it got approved, but I'm still not showing up in search.

So that seems like something where it sometimes just takes a bit of time to bubble through. I don't know specifically when this manual action was lifted or what exactly happened there. Let me just double-check that everything is lined up properly. Yeah, it looks like it was reconsidered maybe ten days ago. So this is something where, especially if we removed a site completely from search, maybe for pure spam reasons, and I don't know your website, but that sounds like what happened here, that's usually a sign that there was something significantly problematic with the website. What happens when we reconsider that and bring it back in, when we say, OK, your reconsideration request was fine, is that we start re-indexing it, and that can take a bit of time. So I'd say the ten days so far is within the normal range. Sometimes it takes a couple of days, sometimes a couple of weeks, for a lifted manual action to be reflected in search.

I'm facing a discrepancy in the robots.txt file in the latest version of the Webmaster tools. It shows two versions of the robots.txt file: one is the actual one, the other shows everything as blocked.

I don't know exactly what you're looking at there. What I would recommend is posting in the Webmaster Help Forum, definitely with your URL so that someone can take a look, and maybe with a screenshot of what exactly you're seeing in Search Console, so that people can guide you and tell you what's problematic, or confirm that maybe everything is already lined up properly. That would be my recommendation there.

How does a manual action happen, and how can we avoid it?

So manual actions are done by the web spam team, and they're generally based on the Webmaster Guidelines that we have in our Help Center. When a website breaks the Webmaster Guidelines in different ways, the manual actions team can take steps to preserve the quality of our search results by taking appropriate action on the website. So I would definitely check out the Webmaster Help Center, look at the Webmaster Guidelines, really go through them, and think about what you might need to stay on top of, especially if you're hiring an SEO and you're not really sure what it is they're actually doing.

If I syndicate my content to another website within an iFrame, how can I ensure that the iFrame URL gets the credit, instead of the page embedding the iFrame?

So the short version is: you can't. In particular, if a page is embedded within an iFrame within a bigger page, it's possible that we will index that embedding page as well. When we render that page, we obviously see your content, because it's visible in the iFrame, and to some extent we may say, well, this is a whole page with this content on it, and we will index it like that and could show it in the search results like that. In general, we do figure out the connection between the iFramed version and the embedding page, and we do try to show that properly in search, to really highlight the original page rather than just the frame. But it's not something where you can say, I want to have both of these visible in search, or generally on the web, but I prefer this specific page to actually be shown. So, things you could do: if you're in touch with the other website that's using the iFrame, have a rel canonical on that page pointing to the actual content version that you want. Alternately, if you don't want your pages to be shown within an iFrame at all, there are options that you can set, I believe in the head of a page with a special meta tag, to say that you don't want your pages to be iFramed. Browsers will respect that, modern browsers at least, and we respect it on our side as well: if we can see that a URL is being shown in an iFrame, but that URL wouldn't actually work in that iFrame, then we will respect that for search too.
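John recalls a meta tag here; in practice browsers only honor this as the X-Frame-Options HTTP response header, which is what a chat participant points out just below. A minimal sketch:

    # Sent as an HTTP response header on the URL being framed.
    # Allow framing only by pages from the same origin:
    X-Frame-Options: SAMEORIGIN

    # Or forbid framing entirely:
    X-Frame-Options: DENY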
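A bunch of people are asking how to get in; it looks like some are already here in the Hangout. And wow, Mihai is listening in while driving. That's a new one.

What happened yesterday? Why did all the homepage www versions get de-indexed? Anyway, all are back to normal again. Just curious.

I don't know. Sometimes things go wrong on our side and things break, and then people scramble really quickly to fix them. So it sounds like something was broken and something was fixed again, which is a good thing. It's really like any other website: it's important to stay on top of things, so that when things break in a weird way, you have someone who's able to fix them quickly.

All right, someone just posted in the chat about the iFrame option: it's called X-Frame-Options and needs to be set to SAMEORIGIN, apparently. So thanks for that. All right.

Hi, John. If you don't mind, can I ask my question?

Sure, go for it.

All right. I'm seeing that some sites are actually showing results from their internal search. So is it OK to have pages from your internal search indexed, if it's actually doing well in search?

So in our Webmaster Guidelines, we recommend not allowing internal search pages to be indexed. The main reason for that is a technical one, in that internal search pages tend to blow up the number of pages that we discover from your website. We can take any random combination of words or characters, search for it on an internal search page, and get a result back, and that's inefficient for crawling and indexing. So from that point of view, we say you should block these internal search pages so that they don't cause any problems in search.

OK, thank you.

All right. Let me see, wow, so many questions still. OK, and some really long ones.

What are Google's basic criteria for considering websites for featured snippets, Google Shopping, and Google News? Is there any technical change required to enter?

So I guess these are very different aspects of the search results, different services even. Google News is something that's manually reviewed; the criteria are in the Publisher Center. Google Shopping is a kind of advertising platform that you can use. And a featured snippet is when we take a snippet from your page and show it a little bit bigger in the search results. So that's essentially the normal snippet that we would always show for any page we have indexed, just slightly bigger, when we think it's more relevant for the user.

For the internal search question, the usual block is a robots.txt rule. A minimal sketch, assuming the site's internal search results live under a /search path; that path is an assumption, not from the Hangout:

    # robots.txt at the site root
    User-agent: *
    Disallow: /search

Note that this blocks crawling rather than indexing as such; John draws that distinction later in the session when discussing a blocked URL that still appears in results.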
What's the effect of two redirects in a row on a domain, from a search engine's point of view?

So from our point of view, that's perfectly fine. What happens on our side is that we follow up to five redirect steps in a redirect chain, if it's a server-side redirect, so 301 or 302 redirects. After five in a row, we'll try to recrawl the next couple separately. So maybe tomorrow we'll follow on from that fifth step and see where the chain actually ends. From our point of view, that makes sense, to avoid situations where we end up in endless loops, or where we just follow redirects around like crazy but don't actually get any content. That's why we limit things there. If you have one, two, three, or four redirects after another, that's not an issue from our point of view, but obviously for users it does slow things down. So as much as possible, we recommend making sure that any redirects on your site go directly from the original URL to the final URL that you want to have shown in search, or that you want to show to users.

Does a dropped domain affect rankings and entity information?

So I think this question is something like: I used to work for a company that had this domain, they moved to a different domain, and now a spammer has picked up the old domain. Do we have anything to worry about? From our point of view, those are different domains, so it's not something that you, with the new domain, would really need to worry about. In general, I would recommend keeping old domains and keeping the redirect live for as long as you can. Sometimes, after a couple of years, it gets lost, especially in bigger companies, who is responsible for these old domains that don't actually get any traffic. But as much as possible, keep things under your control, to avoid the situation where it looks like you're publishing spammy content because someone else has your old domain name and people go there directly. But in general, we do understand these are separate domains and treat them as separate sites.

And I guess the next question goes in the opposite direction: if we buy the domain name of an older business, a business that dropped its domain name, do we get any value out of that? That also comes up every now and then. From our point of view, we try to figure out that these are completely separate sites, and we try to treat them as such. I don't think it would make any sense if, for example, you took your business and moved to an address where, a couple of years ago, there was a really successful business, and said, OK, I'd like all of the clients to come to me instead of the other business, because I just happen to be located at the same address as the old one. From my point of view, that's the way it should work: we should recognize that this is something different and not give a big bonus to the new site that's active on the old address.

Do you need a custom alt tag for thumbnail pictures?

No, you don't, but obviously it's good practice to have alt text on images. Again, like I mentioned before, some people do use screen readers or enhanced browsers that surface the alt text, so that they can understand what is actually shown in the image a little bit better.
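To make the "redirect straight to the final URL" advice above concrete, here's a minimal sketch of a single-hop server-side redirect in Apache; the paths and domain are hypothetical:

    # .htaccess / Apache config: one hop, old URL straight to the final one,
    # instead of /old-page -> /interim-page -> /new-page
    Redirect 301 /old-page https://www.example.com/new-page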
The second question is: we'd like to reduce the number of internal links to 100 or 150 per page. Is nofollow the best option here?

In general, I'd recommend not using nofollow for PageRank sculpting within a website, because it probably doesn't do what you think it does. So what I'd recommend doing, whoops, OK, seems like I accidentally got muted. Sorry about that. So which part of the secret information did I not get out this time? Let me go through the second part of the question again: we'd like to reduce the number of internal links on our page, is nofollow the right option? My response is: if you want to reduce the number of links, then reduce the number of links. Don't use tricks like nofollow, or try to hide those links on your pages, because that probably doesn't do what you think it does.

And John, the third part? Yes. Regarding this question: we have this, let's say, e-commerce website, but in the navigation we have plenty of URLs, let's say 500 or 600 URLs in the menu itself. So is it good practice to only link to the URLs that are actually relevant for the current page, instead of, let's say, for the whole website? For example, if I'm in one particular section of the website, to only show that section's links in the menu. Is that OK, or is it something we should avoid?

That's totally up to you. You can do it either way. This is something where I would maybe do A/B testing. I know some of the bigger e-commerce websites reconsider this all the time, going from gigantic menus that deep-link into a ton of products to really small and simplified menus, and they test this all the time for usability, and of course for search as well. So I would try that out and see which one works best for your users, and probably we can deal with it too.

John, a quick follow-up regarding links in the menu versus links somewhere on the page, the menu being site-wide. Do you recognize that and say, well, OK, there are 600 links in the menu, but it's the menu, we know about all of those links? So it's not like you won't have crawl budget for that page in particular, and you'll just see the links that are unique to that content or that page?

Kind of. So we do differentiate between the primary content on the page and what we call the boilerplate, which is the header, the menu, sidebars, footers, those kinds of things. We try to focus on the primary content when it comes to indexing. For crawling, we do take into account everything else around the page as well, so from a crawl budget point of view, we do still take that into account, with regard to understanding the context between individual pages. Obviously, if the same link is shared across the whole website in a menu, that's trickier, and it's easier for us to pick up if it's within the main content. But it's not the case that if you have a link in a shared menu, we would not use it for crawling.

OK, and when it comes to passing PageRank and other ranking values, do you use links in the menu and links in the content in the same ratio, so to speak?

We use them pretty similarly. We do still differentiate between the two, because sometimes it does make sense to understand that a little bit better, but we do pass signals to the linked page and try to understand the context between those pages based on the anchor text and the text around those links. All of that helps us a little bit.
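For reference, the nofollow John advises against using for internal sculpting is just a rel attribute on the link; a minimal sketch with made-up URLs. For internal links, nofollow mostly discards the link's signal rather than funneling it elsewhere, which is why it "doesn't do what you think it does":

    <!-- A normal internal link -->
    <a href="/category/widgets">Widgets</a>

    <!-- The same link with nofollow; the signal is mostly dropped,
         not redistributed to your other links -->
    <a href="/category/widgets" rel="nofollow">Widgets</a>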
So if I want to signal to Google that some of my pages are really, really important, putting them in the menu is still a fairly good idea, right? You'd see that it's site-wide, every page links to that page, so it must be important, it must carry some additional weight versus a less-linked page?

Yeah, definitely.

OK, let's see. We have a website in German and in Dutch. Is a different TLD enough, or do I need to use the rel alternate hreflang?

So if these are separate websites and they have different content, then different addresses are enough for us. The rel alternate hreflang annotation is useful if you have equivalent content on these pages. So if you say this page in German is equivalent to this page in Dutch, then that link between those pages helps us understand that if we accidentally show the Dutch version to a German user, we can swap out that URL and show the German URL instead of the Dutch one. That's the usefulness of hreflang.

Google's advice for mobile-first is to go responsive, so there's the same content and structured data markup. How important is it that content and links are available on mobile exactly as on desktop? For example, breadcrumbs. Apparently lots of people remove breadcrumbs on mobile pages for some reason.

So I haven't actually followed up on this with regard to breadcrumbs. I'm a bit surprised that breadcrumbs are such a big topic for mobile-first, but it seems like something we should maybe do an analysis on, to figure out what a good usability approach is there. With regard to content and mobile-first indexing in general, we do understand that the UI is much more limited on mobile. So things that are hidden, or things that come up when you click on them, tabs and sliders, all of those kinds of things, that's something we do use normally on mobile when we render the content for mobile-first indexing as well. So I suspect that the way you phrase it here, with these breadcrumbs hidden away on a mobile page, would be absolutely no problem. However, it might be interesting to double-check with someone on the usability side whether it really makes sense to hide breadcrumbs on mobile, because my suspicion is that users on mobile like to use breadcrumbs as well. I don't know. If any of you have more information or data on what you've seen, that would be interesting to hear.

Our brand... Hey John, can I pipe in real quick?

Sure.

So we had a reclassification request go through to the Google support team, and we got an email that said that our website justlegal.com was showing up under SafeSearch, but it's still not showing up there, so they didn't reclassify it. Is there basically a faster way to go about fixing this than asking for another reclassification?

OK. I think I looked at that with the team on, what was it, Monday or Tuesday?

Yeah, you checked our website, I think, two or three days ago.

And they said this just takes a bit of time to actually be reprocessed. Especially if it's a site that used to be adult content for a long time, this is something that can take a couple of weeks to be reprocessed on our side. From their point of view, things are lined up properly, it's categorized in the right way, but it does take quite a bit of time to be reprocessed. And the reprocessing, once it starts, is done on a per-URL basis. So you'll probably see some pages become visible, I don't know, in the near future, I'm guessing a week or so would probably be a reasonable time, and the rest of the pages would bubble up after that.
OK, I'm just concerned because the email said: we've reviewed the site and found that currently the majority of the pages from your site are not being filtered out when SafeSearch is turned on; there's no need to file a SafeSearch review request for your site. So for some reason, somebody didn't pass it on to the actual reclassification stage.

That's the answer: longer, it just takes longer. In our systems, I imagine you just hit a weird moment, where it was already lined up to be removed and so not visible in the tools anymore. But for indexing, this takes quite a bit of time to be reprocessed.

OK, thank you very much.

Sure. I understand this is kind of frustrating when you have to wait things out like this, but it's really not something that we do a lot of, so our systems aren't really tuned to handle these kinds of changes in a really fast way.

Let's see. Our branch finder page currently shows links to the relevant branch pages behind dropdowns, depending on the region. Are these links discounted behind the dropdowns, and should all branch links appear visible on the main branch page? What's the best practice here?

So from our point of view, when this content is not visible on the page when we load it, we will assume it's not the primary content of the page, and with regard to our algorithms, we'll probably not focus on it specifically. However, if someone is explicitly looking for that content, we will know it's on that page, so we can still show that page in the search results. So it kind of depends on what your goal is. If you're OK with the way the page is currently performing in search, then maybe that's perfectly fine. If you feel that the content that's not visible by default is really critical for the page, then I would make sure it's actually visible by default on that page.

John? Yes. What if someone is rendering content via JavaScript, where the whole main content is not visible if you just disable JavaScript, and yet you find those pages ranking right at the top, exactly in the first position? Is it that Google is able to read this JavaScript and pass value to that content, or is it something like domain authority?

We don't use domain authority. We can render pages normally. Googlebot is essentially a normal browser and tries to look at pages as they would be visible in a browser. So if JavaScript shows this content, then we will treat that content as being visible on that page. And like I mentioned before, if someone explicitly looks for that content, we'll still know it's on the page, and we can still show that page in the search results.

OK. So you're saying those pages can rank well even if the content only appears via JavaScript rendering, client-side rendering, and that's fine?

So JavaScript rendering is not the same thing as content being hidden. From our point of view, if the content is visible when the page loads in the browser, even if it uses JavaScript to display that content, then that's visible content. On the other hand, if the content is not visible when the page is loaded in the browser, then that's something we would treat slightly differently.

OK, cool. Thank you.

All right.
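A minimal sketch of the client-side rendering case being discussed: the raw HTML is nearly empty, and the content only appears once the script runs. The markup here is a made-up example:

    <div id="main"></div>
    <script>
      // Googlebot renders the page like a browser, so content inserted
      // by JavaScript on load counts as visible content.
      document.getElementById('main').textContent =
          'Full product description, visible once the script has run.';
    </script>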
Our website was online a long time without hreflang tags, so the wrong content got shown in the wrong countries. We've just added them now. How long does it take for the correct version to be shown?

That's something I don't have any specific numbers for, because it depends on when we recrawl and re-index those pages. The tricky part is that recrawling and re-indexing happen at different rates per URL. Some pages are recrawled and re-indexed every day; other pages take a couple of days, a couple of weeks, maybe even a couple of months to be recrawled. So if you apply this markup across the whole website, then for some of these pages the markup will be effective fairly quickly, within maybe a couple of days, for your homepage, probably. For other pages, it can take quite a bit of time to become effective. Oops, someone has an echo somewhere. There we go. So some changes you'll probably see quickly; some will take a bit longer.

We launched our multi-language help center and submitted the URLs last Friday. The Chinese help center appears in the search results, but the Japanese and English ones are still the old version. What can we do to fix that?

I don't know for sure, but I assume you have separate URLs for the different versions of your help center. If you have separate URLs, then this is more a matter of us recrawling and re-indexing those pages. One thing you can do is submit a sitemap file with a new last-modification date, to let us know about those changed URLs so that we can recrawl and re-index them a little bit faster.

And then the search results for this one query, which I don't know what it actually means, show the robots.txt file. Oh, OK, they show a URL that's blocked by robots.txt. So I guess there are two approaches you could take here. On the one hand, you could leave it like this, if you feel that it's fine and that people can still get to the page despite it being blocked by robots.txt. On the other hand, if you want this page to not appear in the search results at all, you need to allow us to crawl the page, and then use the noindex meta tag on the page itself to tell us not to index that URL. That's something that's sometimes confused: the robots.txt file doesn't block indexing, it only blocks us from seeing the content on the page.

Would it ever, in the future, be possible to manage your own search results with filters? And the question goes into a detailed example where you filter out different things and say, I want to look at this, but not at this.

It's something that I believe we've tried in various variations, and I'm sure we'll try again in various configurations over time. I don't know what specifically has been holding things back there. I imagine a large part of the difficulty is making it actually really usable and helpful for users, rather than just a big page of options and dials that you have to fiddle with to get what you're explicitly looking for. So that's probably a bit trickier. If you have really good ideas there, that's something I'm happy to take to the team. You could also try things out yourself, maybe do something with a Chrome extension, and see how far you can take it on your own. Maybe there's something neat you can do there.
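For the two hreflang questions in this session, a minimal sketch of the annotation: reciprocal alternate links on each language version, so Google can swap in the right URL for the right user. The domains and paths are hypothetical:

    <!-- On the German page, and mirrored on the Dutch one -->
    <link rel="alternate" hreflang="de" href="https://www.example.de/seite">
    <link rel="alternate" hreflang="nl" href="https://www.example.nl/pagina">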
Hi, John. This is Amar.

Hi.

Yeah, I have one small question about the canonical link. Which URL is Google actually indexing: the canonical link, or the URL that you see in the address bar?

Sometimes both. So we use the canonical URL as a signal to understand which one we should be indexing, but we use a bunch of other signals too, like the links within a page or redirects on a page. All of that gets combined, and based on these different factors, we say this is the URL that we will index.

Because we got an alert in the Webmaster tools that a lot of pages are showing as duplicates. The URLs are the same except for uppercase versus lowercase changes. So we're confused whether you go by the address-bar URL or directly by the canonical link.

So upper and lower case make a different URL for us. It might look the same, but it's something different, so we crawl it separately. Usually this is a sign that the internal linking within the website is inconsistent: sometimes you link like this, and sometimes like this, and then we crawl all variations. And then we have so many copies of the same content that we think it would be more efficient if you only linked to one specific version of that URL.

So should we use the 302 redirect, or the canonical as the basic URL?

I would use both, if you can: use a redirect and the rel canonical.

Sir, one small question: should canonical links be human-readable, or is it OK to put a number in them? Because indexing will happen through the canonical link, correct?

We try to do that, yes, but also with the links within your website. So I would pick a URL format that works for you. Sometimes that's human-readable, sometimes that's with a number, totally up to you.

OK, fine. Thank you, sir.

Sure. All right, we have a couple of minutes left, and I have this room for a little bit longer, so we can go on a little bit more. Let me double-check the chat to see which questions we can do here.

I observe that Google doesn't like to follow backlinks with anchors. What about old, genuine backlinks that are difficult to remove? Are these considered spamming?

So if there are old links to your website, and since you're talking about spamming, perhaps an old SEO from before placed those links there, and they're kind of unnatural links, and you can't remove them, then you can use the disavow tool, where you upload a file to Search Console to say: these links, or these domains, are linking to my website, and I don't want them taken into account. So that might be an option, to use a disavow file for that.

There are certain keywords that are our primary target. We put them once in the H1 as well as in the title, but our website doesn't rank for those queries; we rank for other, long-tail ones. What could be the problem?

So we do take content from the page into account for ranking, but we take a lot of other signals into account as well, and sometimes there's a lot of competition for these keywords too. So there's no simple trick you can do to rank on the first page for a particular keyword. You really need to work on your website over time and make it the best website of its kind for those keywords, rather than just putting them in an H1 and in a title on a page.
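For illustration, the disavow file John mentions a moment ago is a plain text file uploaded in Search Console, one entry per line; these entries are made-up examples:

    # Unnatural links placed by a previous SEO; couldn't get them removed
    domain:spammy-directory.example.com
    http://other-site.example.com/old-links-page.html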
OK, a long question: what could be the possible reason for not ranking for generic show terms like Game of Thrones in Google India, given the current scenario of our website, an online streaming platform and official partner of the show, built on AngularJS, which ranks pretty well for "show name online" related searches and offers its own video content? What could you suggest to help us rank better?

So I don't know your specific case or your specific website, but it feels kind of, I don't know, iffy with regard to maybe the quality of the website, or the officialness of the website, especially if you're saying you want to rank for the query Game of Thrones. From our point of view, we'd really like to have a good source when it comes to queries like that, not just a website that happens to link to random torrents of movies that are available online. So that's something where I wouldn't focus so much on technical aspects, like whether this is built on AngularJS, or whether it uses proper sitemap files and things like that, but really focus on making sure that the quality of the site overall is really the best of its kind. OK, you go on to say you're the official partner, actually. But it's still the case that the website itself really needs to provide a lot of value, so that we can, over time, recognize that this actually is a good website for this specific query. Another thing that might be playing a role for a website like this: a lot of these video platforms have restrictions on the availability of the content in different locations. Since Googlebot primarily crawls from the US, if you're blocking users in the US from seeing this content, then Googlebot will have a really hard time indexing anything from your website, because it probably can't access it. So that might be something to look into as well.

Sir, I have one question, please, one question. Hi, sir. I'm managing a bookmark site. A lot of content comes from different, different sites; almost all the content is copied. Is that the main reason the site recently got a manual spam action from Google?

Can be the case. If your content is copied from other sites, then the manual web spam team might say there's little reason for us to index it separately.

But it was recently removed. We tried to resubmit, and again they rejected it. Because this is purely a bookmark site, like, you could say, Pinterest: people bring small content pieces into one place, and their conversions are going on. So a lot of pictures and a lot of pages are gone because of the manual spam action. Is there a chance to recover it, and how?

When it comes to a manual spam action, especially if the site is removed completely from search, that's something where you need to take significant action to make sure that the primary content on your website is really unique and compelling.

By primary content, you mean... because this is user generated; most of it comes from the people using the site, and there's a thread about this in the Webmaster Help Forum.

But it's also something where, since you're saying this is user generated content: from our point of view, the content that you publish, it doesn't matter where it's actually coming from. You're publishing that content and saying, this is my website. So even if that content is generated by users, it's still your website, and you're saying, this is what I want to be known for. So if users are submitting bad, copied content to your website, then you need to figure out a way to say, well, I don't want this reflected in search. Maybe noindex the low quality content.
Find a way to recognize that on your end, and block it from the start.

Thank you.

All right. Wow, still so many questions left. Time-wise we're kind of over, but maybe we can take a few more questions from you, since some of you may still have things on your mind.

Yes, John, I want to ask one more question. It's regarding AMP pages. Basically, we have AMP, and we are thinking of serving these AMP pages as landing pages for our AdWords, sort of thing. Is it possible to do that? And can we implement the Facebook pixel and the AdWords conversion codes, tracking codes, something like that?

I don't know for sure. I believe there is a way to use AMP pages with AdWords, for example as general landing pages, since they are normal HTML pages as well. I don't know about the retargeting or the Facebook pixel things. I assume that's something you could implement yourself as well, because AMP is open source, and other people are probably using these too. That might be something to pick up with the AMP team directly. They have a GitHub site for the AMP project where you can ask questions or add requests, and they can probably guide you towards a solution.

All right. Thank you, John.

OK, I have a couple of questions. One is related to one of the things you answered, because I actually just discovered that my microphone was not working. You said that Google will render a page, even if there's something that JavaScript does. But what if JavaScript changes the rich snippet markup? Is that taken into consideration?

Yes. It's something where we use the rendered version of the DOM, and we try to use that for any structured data that we extract. It's kind of a tricky situation, in the sense that we will initially crawl and index the HTML page, the raw HTML, and then in a second step we do the rendering. So if the structured data markup changes significantly between those two, it's possible that at some point in time we have the raw HTML version, and at some point in time we have the other version. If it's just a matter of no markup in the HTML version and then markup in the rendered version, that's usually fine. But if there are, say, review stars in one and something completely different in the rendered version, then that could be confusing.

And my second question: I know that when Googlebot crawls a page, if there are URLs in the JavaScript, it's possible that it will crawl them as well. For anchor links, I know that I can add a rel nofollow, so that's easy. But what about JavaScript? How can I tell Googlebot not to crawl those ones?

So if it's a URL just in the JavaScript code, then we will try to follow it, but we don't pass any PageRank to it, so it's kind of automatically nofollow. Whereas if you're using JavaScript to create an A element on the page, then you can choose whether to add nofollow to that A element in the DOM, and that's what we will respect when it comes to passing signals.

OK, thank you.
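A minimal sketch of the two structured data states discussed here: no markup in the raw HTML, with a JSON-LD block added by JavaScript so it only exists in the rendered DOM. The product values are made-up examples:

    <script>
      // The raw HTML ships without markup; this adds structured data at
      // render time, which Google picks up from the rendered DOM.
      var ld = document.createElement('script');
      ld.type = 'application/ld+json';
      ld.textContent = JSON.stringify({
        "@context": "https://schema.org",
        "@type": "Product",
        "name": "Example Widget"
      });
      document.head.appendChild(ld);
    </script>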
I have another question, but it's in one of the comments, so I'll let you get to it.

Go ahead.

Oh, OK, thank you. It's about sitemaps; I had already asked it before, but I think there was something that I didn't add to my question previously. Basically, we are only allowed to modify content in three separate directories on the site; we don't have the authority to do anything in the root directory. I have indexable content in some of them, and I have different sitemaps, because there are a lot of pages, so I'm splitting them up and including a sitemap index. According to the sitemaps.org instructions, the links inside each sitemap file should be relative to that sitemap's URL location. But what about where to place the sitemap index itself? Do I make more than one sitemap index, or do I just put it in one of these directories, and can it reference sitemap files from different ones?

So for sitemaps, we take two things into account when a sitemap file is in a different location. On the one hand, if it's mentioned in the robots.txt file, the sitemap file, or a sitemap index file, can be anywhere. On the other hand, if these different locations are all verified in the same Search Console account, then we allow that as well. So if you have different subdirectories, but they're all verified in the same account, then you can put your sitemap index file, or your sitemap file, in one of them and cover all of the other subdirectories from there.

Thank you very much.

There's another question in the chat: recently I observed that Google Search is displaying different results on mobile and desktop. Have you started with the mobile-first results?

So we've been showing different results on mobile and desktop for quite some time now. Essentially, with the mobile-friendly update that we did, two years ago now, we started promoting pages that are mobile friendly a little bit in mobile search. Another aspect that's probably been visible for a really long time is that we show different elements in the mobile search results. So it's possible, for example, that if you search on mobile, we'll show a map, and if you search on desktop, we'll say, well, maybe the map isn't the most critical element here; we'll show it in the sidebar, or further down, or maybe not at all in the search results. Those kinds of differences are really common as well. But in general, we do use the same ranking algorithms on mobile and on desktop. We just tweak things a little bit based on what we think makes sense for the individual user.

Hi, John. Yeah, there is one question; one question is always there. You take a lot of signals into account for ranking.

Hello. Sorry, I missed the second part.

Yes, yeah. There are a lot of ranking signals nowadays, more than 200 signals to rank a website in Google. So can I know which major signals you give the most weight to?

No, sorry, we don't share all of our signals.

At least broadly, in the overall weighting, what are the major signals?

I can't share them. It's something that, from my point of view, doesn't make sense to focus on as a webmaster, because we try to reflect what is useful and relevant content for users, and that's something that changes over time. And focusing on "this signal looks at five words on the page like this, therefore I will put five words on the page like this", it doesn't work that way. Also, a lot of these things differ across the search results. It can be the case that your competitor has five links like this and this particular text on the page, and you have something completely different on your page that triggers completely different signals from our point of view, but we show both in the same search results, and maybe one is on top, or the other one is on top. It's not the case that you have to trigger all of the signals in the same way as other people. So these signals and their weights vary quite a bit across different search results.
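Returning to the sitemap question above, a minimal sketch of a sitemap index that lives in one directory but references sitemaps in others; per John's answer, this works when the locations are verified in the same Search Console account or the sitemap is referenced in robots.txt. The paths and date are hypothetical:

    <?xml version="1.0" encoding="UTF-8"?>
    <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <sitemap>
        <loc>https://www.example.com/dir-a/sitemap.xml</loc>
        <!-- a fresh lastmod helps nudge a recrawl of changed URLs -->
        <lastmod>2017-09-01</lastmod>
      </sitemap>
      <sitemap>
        <loc>https://www.example.com/dir-b/sitemap.xml</loc>
      </sitemap>
    </sitemapindex>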
Thank you.

Yeah, I think it would be a weird situation if everyone had all of the search ranking signals that Google uses and was able to artificially adjust their site to maximize them, because then we'd have to figure out something new. If everyone is just artificially tweaking their pages to rank number one, that doesn't work; we don't have that much room at the top of the search results.

Thank you, John. Thank you.

All right, and I guess to close off, one really important question from the chat: is it true that 301 or 302 redirects harm SEO?

No, it's not. So, I don't know where to start. With redirects, be it a 301 or a 302, they are, from our point of view, a signal that we use in picking a URL as a canonical. We have different URLs that we discover over time from a website, and we might see the same content when we crawl those URLs, and we need to figure out which one of these URLs is the right one to show. A redirect is one signal that we use; rel canonical is something we use; internal linking, sitemap files, external linking, all of these things add up, and we pick one URL to show in the search results. So if you're using a redirect to tell us which one you want, or a sitemap file, or rel canonical, all of those are options. It's not the case that you harm your site by using a 301, or harm your site by using a 302. It's more a matter of telling us which one of these URLs you want to have indexed. Depending on whether those signals all agree or conflict, we'll follow your lead and say, OK, instead of this one, we will index this one, based on the redirect. Or maybe we see the redirect, but internally everything links to the other URL, so we'll say, well, maybe they're not sure about the redirect, and we'll pick this one after all. So it's not that redirects harm ranking. It's just a signal of which URL you actually want to have indexed and shown in the rankings.

John? Yes. We have a similar sort of situation: we have these two similar pages, and we implemented a 301 redirect, but after the implementation, our page dropped from page one; basically, it was not in the first 100 results. After two or three weeks of observation, we removed the redirect, and then the page started appearing on the first page again. So I'm just wondering: we passed everything to the one page, and then suddenly Google seems to see that page as not as useful, even though for us it is the most relevant version, and the ranking was gone. What might be the reason?

I don't know. It's hard to say without a clear example that we can actually look at. But in general, if you redirect from one page to another, that's something we would pick up, and we would just swap the URL we use as the canonical. There might be situations where maybe the canonical you chose is somehow broken, where it doesn't work well for search, or maybe it's removed, either on your side or on our side for manual web spam reasons, and then obviously, if you redirect to that URL, it won't be shown. There are lots of these variations, but it's not the case that the redirect is causing the problem. It's more a matter of that specific URL not being something that we'd like to show in search.
So what I'd do there is maybe post in the Webmaster Help Forum and get a double-check from some peers, to figure out whether everything is set up properly from a technical point of view. Do you have the URL parameter handling tool set up properly? Which URL is actually being indexed? All of those things can add up. But it's definitely not the case that we would say, you changed your URL with a redirect, therefore we will demote your website in search. I can't see a situation where that would make sense, based on the redirect alone.

All right. OK, so let's take a break here. We've gone a bit over time; I hope that was OK with you. It seems you're all still awake. Maybe it's time for a second coffee, depending on where you're located and whether you drink coffee or not. In any case, I wish you all a great weekend, and we'll see you again in one of the next Hangouts.

Thank you then. Bye, everyone.

Bye, John.