All right, welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John. I'm a webmaster trends analyst here at Google in Switzerland, and part of what we do are these Webmaster Office Hours Hangouts. As always, if you're new to these Hangouts, feel free to go ahead and ask the first question. Jump on in. OK, I can go ahead, I guess. John, I have a question. It's regarding a low indexation rate in our sitemaps. I was trying to figure out what the problem was, and I just figured out that one of our sitemaps had a keyword that was in a different language, because we have web shops set up in several countries. And those URLs redirect to the correct country. But probably that was one of the reasons that they don't really get indexed, I hope. Is there any other impact? So, I mean, if they're redirecting to another country, then probably they're redirecting to the right country. They just have a URL that has a word in Spanish, but it's the German web shop, and it's just redirecting to the right shop. Everything is fine. It's just that the... Oh, no, I think you got muted. I think you just need to unmute. Can you hear me now? Yes. OK, no, the keyword is in Spanish, and it's redirecting to the right shop, which is a German shop. So I'm just trying to figure out what the possible impact is. Is that the reason they're not really getting indexed? That's my assumption, but I just wanted to double-check. Yeah, so the sitemap's indexed-URL count is based on the exact URL as you have it in the sitemap file. So if that exact URL isn't indexed like that, then we wouldn't count it there. So if you're redirecting to a different URL and we indexed the other URL instead, then that wouldn't count in the sitemap's count. So that's probably OK. I'd generally try to be as consistent as possible with the URLs that you want to have indexed. So use those URLs in the sitemap file, do the redirects to those URLs, have the rel canonical set up to those URLs, internal links, all of these things. The more consistent you can be, the more likely we'll use those URLs, and the better the statistics will be in the sitemap's count with regards to what we actually indexed for your website. OK, OK. Because, as I said, it's a really big sitemap, and it shows something like zero indexed, so that's probably it. Yeah, if they're all redirecting to a different URL, that's probably because of that. They do, yeah. OK, OK, OK, cool, thank you. Sure. Any other questions before we run into the submitted questions? Nothing? OK, that's fine too. I'll ask a question. All right, go for it, Rob. In Webmaster Tools, on the UK site, we had some warnings that we have duplicate video descriptions and duplicate video titles in the sitemap. But when it actually lays them out, none of them are the same. OK, so the sample URLs aren't correct? No, it says we've detected duplicate video titles in your sitemap, and then on the right it gives examples of the ones that are supposed to be the same, but they're not the same. OK, so that sounds like good feedback for the video team. If you can send me a screenshot of that, that would be really useful. OK, no problem. John, I have another question. I submitted a question for you. Relating to blank cached pages, and what could be the main causes of those, please, from your knowledge? It really depends a bit on the website itself. So the cached page usually reflects what we picked up from the raw HTML when we crawl.
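To illustrate that raw-HTML point with a minimal sketch (the container and file name here are made up, not from the discussion): if a page's visible content is injected entirely by JavaScript, the raw HTML body is essentially empty, so the cached copy can look blank even when the rendered, indexed page is full.

    <html>
      <body>
        <!-- empty container: a raw HTML fetch, and thus the cache view, sees only this -->
        <div id="app"></div>
        <!-- app.js (hypothetical) fills #app client-side, so the visible
             content only exists after the JavaScript has run -->
        <script src="/assets/app.js"></script>
      </body>
    </html>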
So if you have a JavaScript-based site and the JavaScript doesn't run when it's on a different host name, which is sometimes the case depending on the way you have the JavaScript set up, then it can happen that these cached pages look empty. So that might be one thing there. So maybe when we render the page, we actually get the full content, but the cached page is really the raw HTML. So if the JavaScript doesn't work there, it would look empty, even though we index the full content. You're still indexing it, so you don't think it should affect ranking at all? I mean, it sounds like it's just the cached page that you're worried about. If we didn't index the content, you would probably be panicking, like, my website has disappeared. So if you're just looking at the cached page, that sounds fine. You can do a site: query and include some of the words from your content, and you'll see quickly whether we can index it or not. OK, perfect. Thank you very much. All right, so let's run through some of the submitted questions. As always, if you have more comments or questions on the questions or the answers, feel free to jump on in, and hopefully we'll have a bit of time towards the end to cover anything else that comes up. So, a question from Glenn regarding low-quality content on a website and what the best approach there is. I think there have been multiple Hangouts where we say one thing, and then at conferences sometimes we say something else, and it can be a bit confusing with regards to understanding what you need to do. And I think this is something where you really need to take a look at your website and at your specific case. So in general, when it comes to low-quality content, that's something where we see your website is providing something, but it's not really that fantastic. And there are two approaches to actually tackling this. On the one hand, you can improve your content. And from my point of view, if you can improve your content, that's probably the best approach possible, because then you have something really useful on your website. You're providing something useful for the web in general. So when we talk with the quality engineers, that's what they tell people to do: well, you clearly had a reason for putting this out; now be serious about the content that you put out and actually make sure it's useful. Make sure it's high quality. So improving it is probably the best thing to do in general, because in the end, that's extra value that you have for your website. Probably it's something you know something about, or you have someone on your team who knows something about it. On the other hand, if you can't improve the quality of that content, because it's just so much, or maybe you auto-generated all of this content at some point, and you can improve some small fraction of it, but a large part is something that you can't really change at all, then it makes sense, perhaps, to be consistent there as well and say, well, I can't do anything about this, so I'm just going to get rid of it and clean up. And cleaning up can be done with noindex, with a 404, kind of whatever you'd like to do there. So both of those are, I think, valid strategies. Of course, if you can make your content better, I think that's always the best approach. All right: if by mistake we disavowed some good links and the website lost ranking, does it get cleaned up if I remove the disavow file? Yes.
So if you remove a URL from your disavow file, the next time we process it, the next time we process those links, we'll pick them up and count them again normally. I assume, however, if you're seeing a significant, visible drop in website ranking, that it's probably not the case that there are one or two bad links in your disavow file, or good links that you accidentally included there, but rather that there's a broader, systematic problem, perhaps, with your website or its quality that has resulted in this drop in ranking. So unless you went out and disavowed all the links to your website, in which case you will probably also see a drop, then probably that drop in ranking is not due to individual URLs that you accidentally put in the disavow file. So my recommendation there would be to take a look at the disavow file overall and think, is this kind of reasonable or not? And if it's kind of reasonable, then don't sweat the details too much, and instead focus on the rest of your website and really work significantly to improve the quality of the website there. Then some comments on Glenn's question. Give us a rough idea of when the new Search Console will be fully rolled out. I don't have any timeline. So as always, at Google things can change fairly quickly. I don't see this happening this week, so definitely not this week, but I don't think that's what you were looking for. I would keep an eye out for this. I suspect there might be further iterations with the testing as well. So stay tuned. John, just around Search Console, do you think there'll be any point when we'll start seeing voice search data coming through in the console? Because at the moment we have video and obviously normal, natural search. Do you think we'll ever get voice appearing as a data set? So voice queries are already in Search Console. They're just not split out separately, because voice queries, essentially when you search by voice on your phone or with an assistant device, are seen as normal queries. So we would show that in Search Console as a normal query. I don't know if we would split that out separately at some point. I think it's a question also of what extra value it provides to have this information. Is there something that site owners can do with this information that guides them to create even better content on their website, or helps them figure out where specific issues are on their website that they need to solve? That's hard to tell. But I think it's generally something that we've heard from a bunch of people, so I wouldn't be surprised if it comes at some point. But I don't have any timeline on that, and I can't really promise that it will actually happen, because it really depends on what the value of an extra feature there is, of extra functionality, extra complexity for the team here and also for the users outside. What should they be doing with this data that's different than normal web search queries? Nice hair, John. Nice cap, Baruch. Thank you. So: if a website has more than 70% of its pages set to noindex, follow, is this a bad signal for Google? I'm asking because I recently noindexed 1,500 product pages within a category and lost the ranking for that category. So it seems that the products made the category stronger, even when the products are quite similar in description. I think there is no absolute count of what a good number of pages to noindex is. So whether 70% is a problem really depends on what you're trying to do there.
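For reference, the noindex, follow the question describes is typically set with a robots meta tag on each page; a minimal sketch:

    <!-- in the <head> of a page you don't want indexed;
         "follow" still lets crawlers follow the links on the page -->
    <meta name="robots" content="noindex, follow">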
With regards to ranking, of course, if you remove a lot of pages, those pages won't be able to rank. So by noindexing pages that used to get traffic from search, that information would be gone now, and we wouldn't be able to show those specific pages. So that's something that might be playing a role there. Also, you mentioned that these products are quite similar. So it seems that you're kind of using a noindex there as a way of canonicalizing similar products. What I would do instead of a noindex there is really use the rel canonical directly, so that we can actually understand that all of these variations are actually one specific product, and that one specific product is the one that we should be showing in search, rather than all of these subtle variations. And it might even make sense to make that one kind of main product page have all of the information that you have on all of these variations of product pages as well. So if you have different product pages for different colors, for example, maybe it makes sense to have one product page for that product and just list the variations of colors, rather than having separate pages for each of those colors. So those are some things to think about. It really depends on your specific site. But in general, just putting noindex on pages doesn't mean that the rest of the pages on your website that don't have a noindex would be seen as any less valuable. All right, a long question with regards to implementing HTTPS. So over the years, the website has lost a lot of traffic, and then they implemented HTTPS, and the website has lost further traffic since then. I suspect this is something where it probably makes sense to start a help forum thread to look into the details there, because there's so much information in this question that it's really hard to go through in a live type of Hangout. I think one of the things that stands out to me there is also that this looks like a business directory website. And it feels like, over the years, those types of websites have kind of lost importance in many areas, gone downhill like this. I think that people, when they search for a type of business, want to find that business directly in search. They don't want to find a directory that has even more listings and more categories and more things there in the search results. So that's something where I can kind of understand why, over the years, the traffic to these kinds of sites would drop, unless those websites are really doing something amazingly fantastic that really brings people together and is something that people would love to recommend in the search results or on their own websites. So that's something to keep in mind, in that over the years, the trends on the web shift. And what used to be really valuable maybe 10 years ago on the web maybe has a different standing now, or is seen differently by users and by search in general nowadays, and isn't as relevant as it used to be. So these things can shift. The other thing that I see there is that you list a bunch of domain names which seem to be for individual areas, individual countries, I don't know, maybe even individual regions within those countries. And that also hints that you're providing a lot of content on a lot of different domains, which means you're kind of diluting the value of each of those pieces of content. So I'm kind of worried that, on the one hand, you're targeting something that's perhaps not as relevant anymore.
And on the other hand, you're diluting it across all of these different domain names, which means it's even harder for us to say, well, this is actually a really fantastic website. So those are two aspects that, offhand, I see when reviewing your site's position now. But I'd really recommend going to one of the Webmaster help forums and getting some tips from peers, from other webmasters, with regards to the type of business model that you have set up, the type of website that you've put together, the structure that you have there with all of those different domain names. Does that really make sense or not? Maybe you have really good reasons to set it up like that, and you need it like that for policy reasons. It's hard to argue with reasons like that. But maybe there are also things where you need to take a step back and think about your business overall, and think about where you think this will be in the next couple of years, where it will be in 10 years. Will these types of directories still be relevant at that point, or less so? So overall, I think that's a really tricky situation, and there's no easy answer from me. And I'd really recommend getting some tips from general webmaster peers, rather than looking at this from an SEO point of view and saying, oh, I implemented this ranking factor and this ranking factor and this ranking factor, therefore my site will go up 90% in the search results, because that's really unlikely to happen. I just added some SSL and traffic is exploding. Well, that's good. But I think for many websites, just tweaking some things, tweaking the titles, adding HTTPS, won't really result in that site jumping up in visibility by, like, a couple of pages. That's what I meant, yeah. OK, cool. All right. You recently said something like: the URL parameter tool provides suggestions so I can reduce how often a URL is crawled. Do the settings for what a parameter does, so pagination, resorting, also affect crawl priorities? Not really, no. So the settings for what the parameters do are more of an information for us, so that we can better understand what various parameters do on a website and how we can try to recognize that, perhaps, in a more automated way. Telling us how to treat a parameter is kind of a sign of whether or not we should crawl it as much as we've done in the past. But what specifically the parameter changes, like language, pagination, sorting, filtering, those kinds of things, that's usually less of something that we'd watch out for explicitly. Do you guys have a system where you recognize that a site hasn't changed in, like, 13 to 14 years, like as you're moving to the mobile index and so on? Is there a system where you can recognize that the site hasn't changed in so long, and you would send them a message, if it's just a site that doesn't want to change, like an old-school site? Because I've seen one in Canada. And yeah, it's just been that long. Yeah, I don't know. Are you talking about my blog? Maybe I should post something on it. Are you? I don't think I've posted on it for... not quite 13 years. So maybe I'm still good. Haven't changed the site. I mean, are you guys ever going to alert them? I don't think we'd alert them, because probably if they're not changing their site for such a long time, they wouldn't be waiting for a message from Search Console to actually start doing that. So I don't know. I think there's definitely value in old sites that haven't changed a lot.
And there are a lot of technical papers that are out there, or other kinds of documentation archives, that are just the way they are. And they've been like that for years and years and years, but they're still important resources. So just because something doesn't change a lot isn't necessarily a sign that they're doing something wrong. I think the tricky part is we kind of need to understand that not all sites will jump on the newest fad. And they might be desktop-only and use a tables-based layout. And if they're relevant, we should still return them in Search, even if they're not mobile-friendly, for example. So design trends change, like, once a year; a new design look comes around once a year. Do you rate anything based on design? Not specifically. At least I'm not aware of that. But it's something that, obviously, users perceive. And sometimes when a user runs across one of these sites, they might see something so old-fashioned that they say, oh, this is a serious resource; it's been like this for, like, 50 years. And it's something that they can kind of process with that kind of frame of mind. But I don't think sites, by design, need to update their template every couple of years. I think for maybe small businesses, or anything where you want to appear that you're active with your business, that you're actively doing work on your website, that this website is not obsolete, that the opening hours are still OK, I think for that it probably makes sense to show that you're staying with the times. But anything that has been the way it is forever and ever, I don't know, that seems like something that is just the way it is. I wouldn't really force them to update their pages just because they haven't changed that much recently. Right. But it's just that you basically also want to improve the presentation of the first top-10 results. I mean, you're presenting them to your users like we're presenting in a restaurant. So you do want the top-10 performers to look good on your stage, right? I think it really depends on what people are looking for. People have very different needs and different types of information that they're looking for, so I wouldn't say that they need to have a modern layout. And I mean, obviously, there are things where, if people go to your pages and they really can't read your content, because it's written in a way, I don't know, as an image that's built for widescreen desktop pages and you can hardly read it on mobile, then you probably will have trouble getting them to convert, unless you have something that's really compelling that's really only available on your website. So it depends really on what you're trying to do. And in some cases, keeping an old design is more of a hindrance. In other cases, it's just the way it is, and people are really keen on getting that information regardless of what you have there. OK, thanks. Googlebot is crawling, a lot, a URL that is used for constructing AJAX URLs, which are not useful to users on their own. The URL appears in JavaScript code on the website. Should we obfuscate it or block it in robots.txt to avoid Google wasting time on it? So I suspect what is happening is that when we render pages from the website, we run across this URL, and we pick it up and try to use it for rendering.
So it's probably not the case that we're crawling this URL for inclusion directly in web search, but rather that it's part of a page, or part of maybe many pages, on your website, and we're crawling it as we render those pages. So some things you could do: if this is something that doesn't change so much, for example a JavaScript file, make sure that you're using appropriate caching headers on those files. Sometimes there are simple .htaccess lines that you can add to your server configuration that let your server add these caching headers automatically; there's a short sketch of that a bit further below. Make sure that when you're embedding it within JavaScript, you don't have something like a timestamp or a hash associated with the URL, so that it's really one clean URL that we can cache and reuse over and over again as we re-render your pages. So those are kind of my recommendations there. If this is something that's actually pulling in content or changing the way your pages look, I would definitely avoid blocking it with robots.txt, because then we wouldn't be able to render your pages properly. I have a site with a blog section. Is it OK to implement AMP for the blog section only? Yes, of course. You can implement AMP on a per-page basis. I think for trying things out, it probably makes sense to use at least a significant part of your site for this, so that you have some real numbers to compare to. But if you have a blog on your website and you can just throw an AMP plugin in there to turn that on, then why not? Go for it. That's probably a really easy way to do that. Does the Google ranking signal change per query, or per site, or per URL, or all of them? Yes, of course, all of them. So ranking depends on what you search for, the pages that we have from your website, and the content that you have on your website. And of course, where the user is located, personalization, lots of other factors. So all of these things flow into the ranking that a specific user ends up seeing. How can I see local search results for a location I'm currently not in, including advertising? When a client lives in X place, in X country, how can I see exactly the same results that the client sees? Sometimes I see ads from my own country. Oh, how you can handle geo-targeting for ads. For Search, I believe you can go into the advanced search settings and add some information there. But I don't know how Ads does this. Maybe within the AdWords tool or preview tool you can look at that. For anything else, you'd probably need to have a local IP address. We're seeing that our site has blank pages in the cache. I think we looked at that briefly. How important is site architecture? Would we benefit from adding new category-level folders to a flat-level architecture? I think if you're just shuffling things around within your website, it's unlikely that we would be able to pick up any new value from that. If you're setting up a new website, then I'd look into providing a clear structure on your website. I think that definitely makes sense. But if you're just trying to improve things on your website, then just by shuffling the URLs around, I doubt you would see any visible change. You would definitely see a strong fluctuation as we have to reprocess and re-index all of those URLs that are involved there. So you'd have a lot of fluctuation and then be in pretty much the same situation as before. So at least from my point of view, unless you're just looking for something to spend time on, I would focus on some bigger changes on a website.
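As promised above, here is a minimal sketch of the caching-headers idea from the AJAX question, assuming an Apache server with mod_expires available; the file type and lifetime are just examples, not a recommendation from the discussion:

    # .htaccess: have the server add caching headers for JavaScript files,
    # so one clean, cacheable URL can be reused across re-renders
    <IfModule mod_expires.c>
      ExpiresActive On
      ExpiresByType application/javascript "access plus 7 days"
    </IfModule>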
If content is visible on desktop, and not on mobile, will it impact desktop ranking? With mobile-first indexing, yes. So we will take the content that's visible on the mobile version of the page and use that for ranking. And with visible, I mean within the HTML or within the rendered page. So if there are some aspects where you have to use a tabbed UI to actually see the content, and it's already loaded, then that's fine. But if it's not loaded at all on the mobile page, then we wouldn't be able to take that into account. Can internal duplicate content within pages of my website negatively affect the overall quality of the website, assuming that the content is otherwise unique? I'm not quite sure I understand the question. In general, if you have internal duplicate content on your pages, on your website, that's something we can deal with on a technical level. We can recognize the duplicate content, the duplicate content blocks within pages, and treat them appropriately, either by folding the pages together and saying, well, all of these variations are the same thing, we can just index one version, or by indexing them separately and then folding them together when it comes to showing the search results. So usually that's less of a quality issue and more of a technical issue with regards to how we handle things on our side. Obviously, the cleaner you can make your website, the better, and the fewer potential issues you can run across. But overall, we can deal with duplicate content within a website really well. It's been a part of the web for a really long time. How can I do searches to see how I'm indexed in other countries? Do you have in mind any search restricted to a location only? For the most part, I believe we just index the same content for all countries. There are very few exceptions, which are mostly around legal issues. But for the most part, if it's indexed in one country, it'll be indexed in all other countries as well. There is no special way to define and say this page is only valid in the US and this page is only valid in another country. Rather, we index that page, and if we can index it and show it in search, we'll show it to users everywhere. So John, just around that, in Google Search Console it gives you targeting for a specific country. So does that no longer matter, or...? No, that remains relevant. So the targeting is not that it removes it from other locations, but rather that we know it's more relevant there. So if someone is searching for something local in that country that you have specified, then we can show it a little bit higher. So it's not that it's removed elsewhere. It's just that in the country that you have specified, we can show it a little bit higher if we can tell that someone is looking for something local. It's kind of funny, though. I'm on Google.com, and I'm searching for certain brands, and it's showing me the Canadian version on the .com version. And so the .ca ones are higher than the .com ones. Is this because...? Like, you're in Canada. No, no, what I'm saying is this just happened recently. Four days ago it wasn't like that; there was a switch, right? We made a change there that I think we posted about on the official Google blog, with regards to how we deal with the country-code domains when it comes to the user-visible side of search. OK. So I'll have to switch. Yeah, check out that blog post. The one thing where this is a bit different, the kind of country-limiting aspect, is when it comes to videos.
That's one place where we have seen that there are sometimes significant restrictions on where videos are visible or not. And with a video sitemap, you can define which countries your content is relevant or available in and which ones it isn't, and that's something we can use for video search. But for normal web search, there is no mechanism to say, don't show it in these countries, only show it in these countries. In music, I don't know if you know about that, but the law is there. If you're watching something under copyright or whatever, a specific record label is blocking it. So do you guys kind of figure that out? It's kind of annoying when it says, this video cannot be shown, and you're like, uh-oh. So do you guys...? I think that's specific to YouTube, though. So I don't know how we deal with that in web search in general. But yeah, that's definitely annoying. Man, we see a lot of that here. These legal issues make life complicated, or interesting. We'll call it interesting. My website is sometimes on the first page for many keywords and sometimes on the last page. My website has a redirect from another one. How could I improve my ranking? Yeah, I don't think there's one quick trick to improve your site's ranking to always be on the first page of the search results. I think if that were the case, we would have the first page of search results filled with thousands of sites, which wouldn't really be possible. So I don't have any one simple thing to recommend there. What I'd suggest is going to one of the webmaster forums and getting advice from peers, showing your URL, the keywords that you're targeting, the countries you're active in, and trying to get some input from other people with regards to what you could be doing there. There's no one simple answer for everything. What's the impact of proper usability on SEO? This is kind of tricky and hard to say, because there are lots of indirect effects here that are probably more visible. So on the one hand, usability sometimes makes it so that it's easier for search engines to understand the content on a page, and that's highly relevant. If we can understand the content, then we can rank the content. On the other hand, usability is a very big indirect factor as well, in that if people go to your website and they can't use it at all because it's totally unusable, then it's very unlikely that they will recommend it to other people. So from those two aspects, I think it certainly makes sense to focus on usability. But you can also do lots of other things. I think it's hard to make a clear-cut case to say that improving this amount of usability, assuming you could even measure that, has this amount of effect on SEO, because I don't think there is a clear one-to-one relationship there. But there are lots of kind of subtle connections that are definitely worth watching out for. What hreflang values should we use for two languages that, I don't know... Hakka and Yue, I don't know. So for hreflang, we use one of the ISO standard language definitions, language and region definitions. If those languages aren't covered there, then it will probably be tricky. It might also be that we don't explicitly recognize those languages in search anyway. So if you can't set your search results to that language, then it'll probably be hard for us to understand that these pages are really in this language. The kind of workaround there is that we do understand words on a page.
And if someone types exactly that word into their search, and we have matching pages that have those words on them, we can kind of map that and rank those pages anyway. But it sounds like something we should check out with the internationalization team here, to see if there is anything we can do to make that better. Are the penalties the same for every single country? The website penalties? Because in the US, it's more complicated. Yeah. No, the policies are the same for pretty much every country and region. I think the differences that you sometimes see are that certain kinds of spammy techniques are more popular in some areas than others. And that's kind of up to the webmasters who are running these websites, and not really something where we decided what is more relevant in the individual countries. But the policies are definitely the same. Just around that, John, I also run sites out in India, where some of these techniques are quite normal. And so we follow everything in the guidelines, whereas everyone in the top five search results is doing pop-ups and spammy backlinks and everything like that, which is completely against policy. So it's kind of an interesting statement before: are there different policies in different countries? Because we do see allowances in certain areas that wouldn't happen somewhere else. I think it's hard to say, because a lot of these things we take into account for ranking automatically, and that happens across the board. And recognizing web spam for manual actions means that someone manually has to go through those sites and look at them. So it's not the case that we have different policies in different locations. It might just be that these happen to get through at some point. And that's something where your input through the spam report form is definitely valuable. But it's not the case that we say, oh, everyone is spamming in this country, therefore we won't worry about spam, we won't flag other people's spam at all, we'll just let them all get away with it in this country because it's common. If it's against our policies, it's against our policies. It doesn't matter what location you're from. OK. When does Google look at the sitemap XML file? Only when it's added, or by itself? We look at it when it's added. We look at it when you ping the sitemap file. So a lot of CMSs generate these automatically, and they ping these files to us. A ping is essentially a short request to Google's servers saying, hey, this sitemap file has been updated. That's when we go and look at the sitemap file. And thirdly, we look at them regularly from time to time as well. So we don't know when you update your sitemap file, but we kind of try every now and then. And of course, this is something that's done automatically, and it tries to adjust to what it has found over time. So if you put your sitemap file up once and you leave it in place for the next 10 years, then we'll probably look at it a bit more often in the beginning, and then at some point say, oh, this file hasn't changed for a couple of years, maybe we don't need to look at it on a daily basis; maybe once a week or once a month is the right frequency to double-check. But that happens with any other type of file that you submit to Google as well. And again, you can tell us that things have changed with a ping, and we'll go and look at it right away. Hey, John, can I jump in quick with a question? Sure. Thanks very much. First off, that's a very nice costume. I like that a lot.
You're from the Beatles, right? That's the Beatles? No, it's Spock. So I have a client who has a manual penalty for links, the old partial manual penalty for links. And of course, they maintain that they didn't make the links. They don't know what links are, what websites are; they never touched it or did it. And so they're leery to go and do a disavow request, because there's a chance, of course, they could disavow too many links and possibly hurt their rankings. So I just want to ask, you haven't talked about it for a while, what is the procedure for that? Should they just leave it there? Because the damage to rankings has really already been done. Their rankings have already been adjusted; the links have already been adjusted based on what looked unnatural to Google. Should they just leave it in their Search Console? Does it put them at any risk just to leave the manual penalty there, or do they have to go through the disavow process and risk more rankings? So it's a partial... yep, a partial one, for links. Yeah, so in that case, the manual action would be specific to those problematic links that they found there, and those would already be kind of taken out. So I suspect the ranking wouldn't change if you got that resolved by removing those links or disavowing those links. So from that point of view, you could theoretically leave it like that. Some people like to clean things up and make sure that everything is clear, and if you want to do that, that's obviously fine. You can do that with a disavow file. But if it's already taken into account, if it's a partial manual action, then that means that the web spam team has already tried to isolate exactly what these links are and to take those out of the equation. Sometimes what happens there is that maybe someone else will do negative SEO, or kind of try to promote something in the sense of reputation management, something like that. And those are also situations where the web spam team will try to isolate those links and say, well, we want to make sure that these don't cause any problems in our rankings overall; we'll isolate them, and the webmaster doesn't really need to do anything in a case like that. Okay, so again, there are not going to be any ranking issues, there'll be no problems, if they leave that there and don't touch it, and just continue doing their great stuff on their website? Yeah, yeah. I mean, I'd definitely document this, and document roughly what you think might be associated with it, just so that, like, a couple of years down the road, when someone else looks at the Search Console manual actions and goes, oh my God, there's a manual action here, there's something to look back on and say, yes, Josh reviewed this and said it was okay to leave it like this. You want me to put my name on it? I mean, you're looking at it. I don't know. Now we're both in trouble. So how do you mean? I don't understand, actually: how do you mean I should make a mark there? Like, in a digital file, or like a file, or... No, no, just for your internal documentation. So that, as a website owner, you have, I don't know, a document somewhere saying, well, we know there is this manual action here, and we know roughly what it's from, and that's okay. Okay. All right, thanks, John. So it's for peace of mind, not something that you have to bring to Google court. All right. Okay, thank you.
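As a side note, for anyone who does decide to clean up with a disavow file: it's a plain-text file uploaded through the Search Console disavow tool, one entry per line; a minimal sketch, with made-up hostnames:

    # lines starting with # are comments
    # disavow one specific URL:
    http://spam.example.com/some/page.html
    # disavow all links from an entire domain:
    domain:spam.example.com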
All right, let me double-check to see what other things are here. With regards to AMP pages and crawl budget: I think that's something that sometimes comes up. So we do try to pick up changes in AMP pages fairly quickly after crawling, and those are things where Googlebot requests the AMP page, because that's how we update the AMP cache. But usually, on an overall basis for a website, that's something that at least evens out a little bit, in the sense that users will be going to the CDN version of the AMP page and not to your website directly. So you'll see less traffic directly to the AMP pages, and more traffic from kind of Google's cache-updating system to update the cache there. So overall, that shouldn't be an issue for a website. It might look weird in the logs if you're just looking at the raw logs, you know, like, why is Google looking at this page so often when there are barely any visitors going to this page? But that's the way the AMP setup works, in that users go to the CDN version, and you'll see that in your analytics logs if you have that set up, and Googlebot will update the cache directly by looking at your site. All right, I need to head out a few minutes early, but we can do a couple of questions from you all before we head out. Quick question, John. It's actually the same one from last night's session on Twitter, but I figured this might be of interest to other people as well. Regarding a business that has two independent offline locations: let's say the website does have a page for each individual location that describes the services, something like that, contact hours. Regarding structured data markup, how is it best to mark up those two locations? Do you mark them up on the home page and use an array for both addresses, or maybe mark up each individual page? How would that go best? I don't know. That's why I don't have an answer for you on Twitter yet either. I don't know. So I suspect it's easiest if you have one landing page per location. But sometimes you have multiple locations on the same landing page, if you have multiple locations the business is active in, and that works well. I think it kind of depends on what you want to get out of the structured data markup. If you're using it to supply information about opening hours or contact information, it kind of depends there. But I don't have an absolute answer there. OK, well, if you get in touch with the team, let me know how that would best make sense. And also regarding that, you have that URL field in the LocalBusiness structured data, usually. Does that work with hreflang? So if you have a site in multiple language versions, do you just put one of them, and Google will kind of take care of that? I doubt it. So, yeah. I'm actually not completely sure, but I pretty much doubt that it would automatically swap out the language, and definitely not the country information, because that would be specific to that location. I'm mainly interested in the ReserveAction markup that you can use, shown in the knowledge graph, where you actually can get a book-an-appointment button or something like that. I was curious whether you can do that for multiple languages, and if hreflang plays any role. I really don't know. Man, you keep asking these questions. Sorry about that. Questions. I don't know. If you start a thread for these things in the help forum, I can definitely guide someone from the structured data team there to take a look. It might be that some of these types of markup are also only available in certain countries.
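For illustration of the earlier part of that question: with the "one landing page per location" option John leans toward, each location page could carry its own LocalBusiness JSON-LD block; a minimal sketch, where all names, URLs, and details are made up:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "LocalBusiness",
      "name": "Example Cafe (Downtown)",
      "url": "https://example.com/locations/downtown",
      "telephone": "+1-555-0100",
      "openingHours": "Mo-Fr 08:00-18:00",
      "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 Main Street",
        "addressLocality": "Springfield",
        "addressCountry": "US"
      }
    }
    </script>

The home-page alternative the question mentions would instead list two such LocalBusiness objects in one block, one per address.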
So I don't know if that country availability would be the case here; it can vary depending on where these websites are. Well, LocalBusiness markup, I guess, doesn't really show a rich snippet; it's mostly just to kind of connect the name, address, phone information with the main business contact information. So I don't see why it would be country-specific. But yeah, it's Halloween, so I have to ask tricky questions. OK, man. Unfair. OK, I'll definitely check with the structured data people on that. If you can start a thread with some examples and the specific markup that you're trying to do, then I can definitely check with them. Sure. Thanks. Well, unlike Mihai, I have a quick, easy question for you. Let's see if I can prove that. The related: search function, the operator in Google: when it shows no websites for a website, or even maybe shows, quote unquote, the wrong websites, or websites that maybe the site owner doesn't think they should be related to, what's going on there? I don't know. Man. Darn, I thought it'd be easy. I'm clearly in the wrong job today. I don't know. I'm not exactly sure what the related: search operator actually pulls up; it's from so far back. I'm not really sure how it pulls things together there. So we probably sometimes see things there as well where it's unclear where a connection comes from. Overall, I really wouldn't worry about that, because that's such a niche functionality. But if you have specific cases where you're seeing that something really weird is coming up, I can definitely pass that on to the search quality team. Usually these aren't things that they would fix manually, but they would take these and say, well, we need to double-check what the data is actually doing there. And they'll go and look and see, oh, this is some weird edge case. And maybe they can't fix it; it's like, OK, fine. It might also be something that they can't fix. Fair enough. Let me ask you a different way, then, to get to the point of what the worry probably is. So sometimes people look at that, and they worry that because nothing shows up for their related: search, that therefore Google doesn't understand their entity well enough, or they're not important enough, or they've got some kind of filter, or something along those lines. So you're saying this was probably even written before Hummingbird, maybe not updated since, like, June 2000. It's probably working on LSI, if anything. And it's probably just a threshold where, yeah, you've got enough, you're important enough for us to do this, or no, you're too new, we haven't bothered to do it yet. Is it something like that, maybe? Yeah. So I think that's generally something I've heard from a bunch of people, where you start a new website and you want to be sure that Google doesn't just pick up the words but actually understands what your site is about. Yeah. And at the moment, I'm not aware of anything that really provides this kind of information. And I think it would be useful. So if we had a simpler way of kind of getting at that information, that's probably something we should try to figure out how to do. OK, cool. Thanks a lot. All right. So with that, I need to head out. It's been great talking to you. Thanks for all of the questions. Sorry I couldn't get to answer all of them. If you send me some links to forum threads, I can try to guide the right people to those and get you some answers to things that maybe aren't completely clear yet. All right. So thanks again. And hopefully I'll see you all again Friday, maybe, I think. The next hangout is on Friday. See you then.
Thanks for your time. Bye, everyone. Happy Halloween, John. Thanks, you too.