All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a webmaster trends analyst here at Google in Switzerland, and part of what we do are these Webmaster Office Hours Hangouts, where webmasters, publishers, bloggers, and SEOs can join in and ask us questions directly, all about search. There are a bunch of questions that were submitted already, but as always, if you're new to these Hangouts and want to get your question in from the start, feel free to jump on in now.

Hey, John. Sorry, is somebody else talking? Just a really quick question, and I'll get out of here quickly. I'm looking into Googlebot's ability to render JavaScript at the minute. We've tested this plenty of times, and Googlebot isn't really able to execute any JavaScript events, like on-click or on-scroll. So there's quite a lot riding on certain websites depending on the fact that Googlebot can't execute these events so far. Do you have any plans to start executing on-scroll, on-click, any kind of events?

What particularly are you looking at?

So for example, one basic configuration of infinite scroll would mean that when the user gets that page, you have one URL. If Googlebot is able to execute on-scroll, then it would essentially see the content of two articles on one URL, and the relevancy of the article won't make sense. So one of the safeguards for infinite scroll is making sure that the extra content only loads on scroll, so that Googlebot sees one URL with just that URL's content.

I think we do scroll a little bit, though. That's something I think we've been doing for a while. Back when I did that blog post on infinite scroll and how to set that up, I checked some of the pages and how they were indexed, and I think there were one or two additional page views that were being indexed per page. So some of that scrolling is already happening. I think it's at least in the plans that we do some amount of scrolling down to make sure that there's nothing we'd otherwise miss there. So if you need to prevent that content from actually being loaded, what you could potentially do is put the script that's executed with the on-scroll behind a robots.txt block, or take the server response that sends back the content for that on-scroll section and block that by robots.txt.

OK, so that's allowed then. Another use case would be, imagine we were designing filters. Say we're a real estate website and we have a bunch of obscure filters, like pet-friendly, and we want to be able to combine them. One of the things you might want to do is, on click, have a function execute and return false so the page doesn't reload, but essentially send back a response from the server and then have additional listings appear or be filtered. So are you saying that if we have a function that's in an external script and blocked by robots.txt, that's the safest way to do it?

Probably. I think with on-click it's a little bit different, in that we don't click on all of the elements. But what does happen is that we look at the JavaScript files that we load, and we try to see if there are any URL-like strings in there. If we find something that looks like a URL, we might try to crawl that to see if we're missing anything. So if you have an on-click event that just pulls in additional content with a server response type thing, then probably we wouldn't do anything with that.
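As a rough illustration of the robots.txt approach described above, here is a minimal sketch. The endpoint and script paths are hypothetical, and the snippet just uses Python's standard robots.txt parser to confirm that rules like these would keep Googlebot away from the scroll-loaded content while leaving the normal listing URLs crawlable:

```python
# Hypothetical robots.txt rules that block the endpoint serving the
# on-scroll content, so only the first page's content is crawlable.
import urllib.robotparser

ROBOTS_TXT = """
User-agent: Googlebot
Disallow: /ajax/load-more          # hypothetical endpoint returning extra listings
Disallow: /js/infinite-scroll.js   # hypothetical script that triggers the loading
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The scroll-content endpoint is blocked, the normal listing URL is not.
print(rp.can_fetch("Googlebot", "https://example.com/ajax/load-more?page=2"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/listings/paris"))         # True
```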
But again, if you really want to be sure that we don't actually process that, putting it behind a robots.txt block is probably the best approach there.

OK, so just one last follow-up question. I understand what you're saying, and I didn't know that that was actually allowed; I thought Google Search Console would probably come up with errors. So another implementation would be, if you have this real estate property page, essentially on the first response we send back all the listings and you have that page rendered. The other thing we could potentially do is have all the additional information loaded in a JavaScript object, and from there we could just load it in, so that no extra URLs are accessible unless it's something we need indexable, and then we could have a fallback href. Would that be OK?

That would probably work too.

Sweet. OK, thank you very much for answering. That's awesome.

Thanks, Joe. Bye. OK, any more questions from any of you who aren't here regularly? Then we'll jump on in with the questions that were submitted. As always, if you have any comments or further questions, or clarifications that you need for some of these questions or answers, feel free to let me know and we can look at that.

All right, let's see. Where do we start? One question submitted by Marie Haynes is around PadMapper, which has a weird issue in that they drop out of all search results for some specific types of queries that are limited to specific locations and a specific type of result or content. I looked into that a little bit, but as far as I can tell, this is essentially ranking as it would algorithmically. I'm double-checking with some folks here just to make sure that I'm not missing anything, but from the general setup, from what I can see overall, this looks like we're handling it the way that we normally would. There is definitely nothing manually blocking the site there. So I don't really have any insightful comments to add around that.

Take two keywords, for example "bus ticket online" and "online bus tickets." Google AdWords shows search volume for only one keyword in such a case. Should we expect that the search results in Google would be the same for these keywords? If not, which one is the most relevant? So in general, the AdWords keyword tools do look at things like search volume as well, but they're optimized for different things, so that's not a one-to-one view of exactly what happens in search. If the AdWords keyword tools do one thing with regards to folding things together or not, that doesn't necessarily mean it'll be the same in search. This is the type of thing where I would just try out yourself and see what happens: if you're asking whether the results are the same for these keywords, you can just plug those keywords in and see what actually happens.

Let's see. Then there's a question from David with regards to the rendering that we looked at. Does Googlebot make a HEAD request, or is there only a GET request? So a HEAD request essentially fetches just the headers of a page, and a GET request would mean the whole page. I don't actually know for sure. I know back in the day we used to do some HEAD requests to see if a page had actually changed or not. I believe in the meantime we pretty much only do GET requests for normal crawling and indexing.
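For reference, the practical difference between the two request types, sketched with the third-party requests library (example.com stands in for any URL):

```python
# A HEAD request returns only the response headers, while a GET request
# also returns the page body.
import requests  # assumes the third-party 'requests' package is installed

head = requests.head("https://example.com/")
get = requests.get("https://example.com/")

print(head.status_code, len(head.content))  # headers only, body is typically empty
print(get.status_code, len(get.content))    # full HTML body
```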
Obviously, if you have JavaScript on a page and that does some kind of special request with the JavaScript side, then we'll try to process that and try to pick that up as well. So it doesn't mean that you'll never see a head request in your server logs or that you'll never see a kind of a post request in your server logs. But probably they're a lot more rare than the normal get request where we would fetch a normal URL the way that would happen if you entered that URL in your browser and hit Enter. All right, then there's a question about a domain. We have one domain with several country domains, like FR and IT subdomains, for example, and several vertical categories for each country. In the French market, we'd like to change the name of the category and redirect all URLs from the previous to the latter for a total of 1,000 redirects. I think it kind of goes into like we think this makes sense because more people are searching for that one kind of new name than the old one. So we'd like to kind of have that covered. Would this kind of operation have any effect on search rankings considering that the old one has been around for years and has collected a decent amount of authority over time? So in general, these type of site migrations or restructurings of sites are things that kind of happen over time. And a lot of websites do that. Changing subdirectory names is something that can happen over time. Subdomains can change. You might even want to change your domain name if you wanted to. And handling redirects is something that our systems are generally pretty good at. So regardless of the number of URLs that you're redirecting there, that's something where if you set up these type of redirects properly, you'll generally see that it settles down in the new state. However, when you're doing this type of internal restructuring where you change a significant number of URLs within a website, that always means we have to kind of reprocess the whole website to understand again what actually changed on the website. So that's something where probably you'll see a period of time with some fluctuations, maybe not as high visibility as you had before, just until everything kind of settles down and returns to a kind of reasonable state where we understand exactly these URLs are related to the old ones and the rest of the website is here and they're kind of tied in together as well. So this type of restructuring, especially if it's for a significant number of URLs, my recommendation would be to make sure that you do it technically correct. So really make sure that you have everything prepared, that you have lists of the URLs, that you follow the Help Center advice on kind of site moves. And then secondly, I try to do this during a time where you're not dependent on search traffic for that part of your website. So if you have kind of like a low season where traditionally people don't search that much, that might be an option. If you're planning a bigger marketing campaign where you're driving traffic to your website in other ways in search, that might be an option as well. So that's kind of what I'd recommend there. I don't think you'll get around not having any fluctuations when you do this type of internal restructuring, but at least by making sure that technically everything's set up correctly, you can kind of limit the time effect that would be there. All right, I have a client who has, in his link profile, do follow directory links from around 15 domains that link with branded anchor text. 
Should I be worried about these? Can they harm my client's ranking? Is any algorithm discounting them already? Is it a sign that it's a low quality site, perhaps? So just because they're from directories and they're branded doesn't necessarily mean that they're good or that they're bad. I'd still kind of look into what exactly is happening there, where they're coming from, what was involved with those links, and then make a judgment call whether this is normal, natural links, or if these are problematic links. And also, given the number, when you say 15 domains, I assume a website that's been around for a longer time probably has a lot more links. So if these are just 15 links that you found, which you're not sure about, then probably that's not really something that you really need to worry about. So those are kind of my thoughts around this general problem there. In practice, if you think that these are unnatural links or that they might be unnatural links, disavowing them is fine. We will take them out of our calculations. And that might be one approach. But just because they're from a directory doesn't necessarily mean that they're bad links. All right, now we have the question, subdomains versus subdirectories. I think we have this question pretty much every hangout. And given that it keeps coming up all the time, my kind of rough feeling is that even around the rest of the web, it's kind of like we don't really know. And maybe there isn't a clear preference one way or the other. So my recommendation there would be kind of to look at your website and think about what makes sense for your specific case. From our side, we kind of treat subdomains and subdirectories in the same way. So that's something where I would look at more what makes sense in your specific case. We're launching a localized website for Israel but can only serve Hebrew language on the blog, which is a different platform than the retail site. The retail site will be in English. Any issues from an SEO perspective in having two languages on the same domain? Will that confuse crawlers? Will that be problematic for our rankings? In general, this is totally unproblematic. We look at language on a per page basis. So if you have some pages that are in Hebrew, some pages that are in English, that's perfectly fine. We look at them individually. So from that point of view, I don't really see a problem here. With regards to the retail site, you just mentioned that it'll be in English. But I am guessing from the context, it'll be in English and Hebrew. And if that's the case, then I just make sure you have the hreflang links between the equivalent pages on the retail site. And if on the blog, you only have one language, that's perfectly fine. There's no need to do anything with a hreflang. I have a question regarding flexible sampling. So flexible sampling, for those of you that didn't see this, is based on the old first click free setup that we used to have. Only it's more flexible in that you can define yourself as a publisher how often you want your content available for free, and you can do that in a flexible way and dynamically choose even between different types of users and showing the paywalled content or the visible content. All right, so the question goes on on a website with paywalled content. We want to implement the class name and structured data so that Google can differentiate our paywalled content from the practice of cloaking. The problem is that we don't want our content to be found in our source code. 
Since it's an Ajax-based website and we still use escaped fragment URLs, I wanted to ask if it's OK to only deliver the schema markup and the class names in the pre-rendered HTML for Googlebot, while the normal URLs still deliver the same code. So I think there are two different general questions here. On the one hand, if you implement flexible sampling, then obviously you'll be serving the full content to Google and you might not be serving that full content to users. That difference between showing one version of the content to Google and one version to users is, I think, natural with flexible sampling. So from that point of view, that's fine.

The other part that I'm a little bit more worried about is that it sounds like you're using the Ajax crawling setup with the escaped fragment URLs. From our point of view, that's a deprecated setup, and I would expect that at some point we would be turning off crawling of the escaped fragment URLs and would only try to crawl and render the hashbang URLs themselves, which are these JavaScript pages. So if you're only serving the structured data on the escaped fragment version of the page, then I suspect that will make things a bit more complicated in your specific setup. That's one thing I'd keep in mind, that that can happen at some point. I believe we stopped recommending the Ajax crawling setup maybe two years ago, so that's something where I expect us at some point to switch to the rendered version. And given that you're thinking about setting up new structured data on these pages and you only want to serve that structured data on the escaped fragment URLs, then probably you need to rethink your general website setup there, to see what you could do to serve the structured data within your rendered view, for example, or what it would take in an extreme case to move your website from the hashbang setup to a more traditional URL setup. So your question is starting off with something simple, like "where should I put my structured data on my page?", and you're hearing an answer like "well, you need to redesign your website first." I don't really have a simple approach there. That's definitely something where you're probably going to have to think a bit more about your overall website strategy before you come to a clean answer with regards to the flexible sampling setup.

About two years ago, we moved a complex site of about 200,000 pages from domain one to domain two, and now we'd like to upgrade to HTTPS. My question: should we update the 301 redirects from the old domain to point directly to HTTPS, or leave it alone, thus having two hops? I think if you moved two years ago, then you can just leave it alone. We probably don't crawl your old URLs that often anymore anyway, so it's not that critical for us to have it all in one hop. In general, two hops themselves would also be unproblematic. It gets a bit trickier when you have, I think, five hops or more, where we can't follow those redirects right away when we crawl. So with two hops, that's something we can still follow normally. When it gets to a lot more redirects in a row, what happens is that we follow the first batch of redirects, and then the next time we consider crawling this URL, we'll follow the next batch of redirects, which means it takes a lot longer for us to find that final URL from the initial URL.
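If you want to see how many hops a particular URL actually goes through before reaching its final destination, a quick sketch along these lines can help; the URL is hypothetical and the third-party requests library is assumed:

```python
# Count the redirect hops a URL goes through before the final response.
import requests

resp = requests.get("http://old-domain.example/some-page", allow_redirects=True)

for i, hop in enumerate(resp.history, start=1):
    # Each entry in resp.history is one redirect response in the chain.
    print(f"hop {i}: {hop.status_code} {hop.url} -> {hop.headers.get('Location')}")

print("final:", resp.status_code, resp.url)
```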
But given that you've done this site move already a long time ago, and it's just two hops instead of one hop, I really wouldn't worry about that.

In the crawl error report under Not Found, we can quickly rack up hundreds of thousands of 404 errors. This is due to our servers undergoing rolling maintenance from 1 AM until 2 AM. During this brief window, Googlebot will get a lot of 404s. Is there any way to limit the time of day Googlebot crawls a website? And is there a way to quickly remove over 1,000 Not Found errors in the reports at a time, something like a Clear All button? So, starting with the end: you don't need to remove these errors in your report, because removing them with the "mark as fixed" button in Search Console essentially just hides them in the UI. It doesn't change anything on our back end. It's really something that you'd only be doing to make the report look clean, even though we know of all of these errors, so that's probably something you can skip. The other thing, limiting crawling to a certain time of day: that's something that we don't support with Googlebot. And finally, what I would recommend doing in a case like this is making sure that you're serving 503 as the HTTP result code instead of 404 in this kind of setup. A 503 is a "temporarily unavailable" result code, which tells Googlebot that your website probably is here but is currently not available, so maybe Googlebot should come back a little bit later. Doing that during a time like this is perfectly fine. A 404, on the other hand, tells us that the page we tried to crawl actually doesn't exist. And I imagine what will happen there is that we'll run across all of these 404 errors, we'll drop a bunch of pages from our index based on these 404s, we'll pick them up again the next time we can crawl, and you'll see this fluctuation of URLs being indexed or not indexed. And when a URL can't be found in search, you don't really know: is it because it was a 404 last time, or is it because there's some other problem on my website? So for this type of issue, I'd really recommend the 503 result code.

How important is Fetch and Render in Search Console? It reads our full home page code, but the actual image in the render view looks strange. Does this mean Googlebot can't read our entire home page? Or is there some other code that looks at the rest? So Fetch and Render is actually pretty direct with regards to what Googlebot actually picks up on a page. However, it's just a screenshot view of the page, so if there's something that's below the visible part of the page in Fetch and Render, you don't necessarily see that. That's something that might be playing a role here with regards to understanding what is actually renderable by Googlebot. My guess at this general question is that the image is probably distorted on the page due to something in the CSS. When Googlebot renders a page, it renders it with a fairly high viewport, which is much higher than a normal laptop browser window or something like that. So if your page, when it's rendered with a really high viewport, has images that are stretched that far, that might be something that makes it hard for us to actually render the view in Search Console, because we stretch this image, and if the text is below the image, then that's something that gets dropped in that preview window. We still use it for indexing, but we have trouble showing that to you in the screenshot view.
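One way to reproduce this kind of tall-viewport rendering locally is sketched below, using the third-party Playwright library; the viewport size and URL are made up for illustration:

```python
# Render a page with a very tall viewport and save a screenshot, to see how
# stretched images or CSS behave when the visible area is unusually high.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page(viewport={"width": 1280, "height": 9000})  # hypothetical size
    page.goto("https://example.com/")
    page.screenshot(path="tall-viewport.png")  # captures the tall viewport area
    browser.close()
```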
So what you might want to do is double check your pages with a really high viewport. In Chrome, you can set it up and emulate it with the responsive view, where you can say "I want a page that's this wide and this high of a viewport," and it'll zoom out to show the whole page, and you can see what might be the problem there. And you can adjust the CSS settings to limit the size of your images so that they don't get stretched for the full viewport size. That will probably make it easier for you to debug these things with the Fetch and Render tool. It's not critical for rendering or for indexing of your pages, but it does make it a little bit easier for you to double check what is actually happening.

All right, large websites. If their home page is just a bunch of internal links and they're ranking for generic queries, for users both cases are fine, whether you add content or not, because the user knows your brand and wants to go to a page that they think is related to their requirement. In such a case, for the search engine, does it give any benefit to the website to have a brief introduction or not? So I'm not completely sure what you mean here, but in general, it definitely makes sense to be direct on your website about what the value of your pages is and what you provide for users. This is a really common problem that we run across when we look at sites for site clinics at conferences, for example: it'll look like a really nice site, really well designed from a graphical point of view, but when you open the page, you don't really know what this business or this website is trying to show you. Is it trying to sell a service? Is it selling a product? What are they trying to achieve with their home page? It might look really nice and fancy, but it's unclear from the content on the page what it actually is that they're trying to achieve. And with that, you kind of lose users: they go to your page, they essentially get lost, and they're like, what do I do here? Whereas if you have clear calls to action, if you have clear information about what service you provide that's unique and valuable to users, then that makes it a lot easier for users. And it definitely makes it a lot easier for search engines, because they have text they can look at and say, oh, this website sells this service. We can take that and put it into a snippet. We can take that and use it for understanding the relevance of this page when people search for something around that kind of topic.

John, regarding that, I just wanted to say that with OTAs, online travel agencies, we have seen this kind of thing a lot, where their home page is ranking for highly searched queries. But if I go to that page, there is only one booking widget and links to internal pages, and they're just ranking for these high-volume queries. So in such a case, I was trying to understand if adding some content really helps compete with them, or if it doesn't matter because the search engine is already ranking those pages.

So you mean if someone else were to create a page with more content? I think that's always tricky, because you're looking at different factors. This is something that applies to this specific case, where you have one really strong website that doesn't really have that much good content, and you have another website that's maybe not as strong, but it has a lot of good content.
And our algorithms try to understand what is the balance for each specific query. And we try to figure out where we need to draw the line and say, well, this amount of good website kind of matches this amount of good content. And because of that, we should rank this one here and the other one below it. So that's something where there is no absolute answer where I can just say, well, 2,000 words on our page match five external links or something like that. It's not something that really you can kind of compare. All right, let's see. One website I think we talked about on Twitter already. In a previous chat, it was mentioned that at least one of the algorithms looks for excessive use of money keywords and alt tags to decide if they're spammy or not. So assuming the webmaster uses normal sentence syntax, it's a quality website that gives value. Alt tags are being used for their intended purpose. Would using articles, conjunctions, and prepositions be a good way to avoid getting danged or getting slightly more points? So to me, this sounds like a really weird situation where you're kind of asking, would using normal text on a page actually be useful? And of course, using normal text on a page is useful. So writing naturally, writing in a way that works for your users is something that we do try to pick up when we look at pages. So obviously using articles, conjunctions, and prepositions in the way that makes sense for that language is the right thing to do. You don't just throw a bunch of keywords on your page and assume that search engines will pick that up, and users won't mind that they can't actually read any of the content. So of course, you should write in a natural way on a page. We migrated to HTTPS 11 days ago, and only 20% of our submitted site map URLs have been indexed so far. Does this rate indicate any crawl issues, or would it be expected? I don't actually know what percentage would be acceptable for individual websites. It really depends a lot on the website itself. So this is not something where there is like one count or percentage that we can say, like after 10 days, 20%, that's something that really depends on the website itself. It depends on the structure of the website, kind of how often we recrawl the website, what kind of URLs you have on your website. So for example, if this is a news website and you have news articles that go back into, I don't know, 1800s, then you have a lot of content on your website that from our point of view probably doesn't make a lot of sense to recrawl every day. So maybe we'll look at this content, I don't know, once every couple of months, maybe once a year. So in a case like that, we wouldn't go off and recrawl everything, all of that old stuff, right away just to make sure that your percentages for the submitted site map URLs is high. So that's something, it really depends on your website. It really depends on the type of content you have, the type of pages they have on your website, the crawling that we do on your website, all of these things kind of come together. My general feeling is that this might be completely natural in a case like this. So what I would do, actually, if you're looking at the submitted URLs for per site map file is to split that up into smaller site map files so that you can kind of judge that a little bit better. And also, what I'd recommend doing is just looking at the overall traffic to your website. 
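On the suggestion just above of splitting a large sitemap into smaller files so indexing coverage can be tracked per section, here is a rough sketch; the chunk size, file names, and URLs are all hypothetical:

```python
# Split one long URL list into several smaller sitemap files.
import xml.sax.saxutils as su

def write_sitemaps(urls, chunk_size=10000, prefix="sitemap"):
    files = []
    for i in range(0, len(urls), chunk_size):
        name = f"{prefix}-{i // chunk_size + 1}.xml"
        with open(name, "w", encoding="utf-8") as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
            f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
            for url in urls[i:i + chunk_size]:
                f.write(f"  <url><loc>{su.escape(url)}</loc></url>\n")
            f.write("</urlset>\n")
        files.append(name)
    return files

# Example: write_sitemaps(["https://example.com/page-1", "https://example.com/page-2"])
```

Submitting each resulting file separately in Search Console then shows indexed counts per file, which makes it easier to see which sections lag behind.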
If people are still reaching your website and they're just going to the HTTP version and getting the redirect to the HTTPS version, then from a practical point of view, they're still on your website. They're still getting the HTTPS version in the end. So it's not really critical that you fix the re-indexing of your pages immediately.

Are sidebar, header, and footer links the same for Google, or different? Also, the sidebar changes per template, meaning if it's a hotel template, the hotel links in the sidebar will repeat, but it will change if there's a flight template and there'll be flight links. That's perfectly fine. We do try to understand the primary content of the page, and that means we try to understand where the header, the footer, and the sidebars are, so that we can focus more on the primary content of a page. If you have different parts of your website that have different content in the sidebar sections, that's perfectly fine. In general, we're still able to figure out where the sidebar is, even if the content within the sidebar changes. That's something, for example, that we see a lot on news websites, in that they'll list the recent articles in the sidebar, and for us, pretty much every day when we look at the website, there's different content in the sidebar. But that doesn't mean that this content is part of the primary content of that page. So if you think it makes sense for your users to have different sidebars in different situations on your website, go for it. I think that's a great idea.

So Googlebot considers them the same, whether they're header links, footer links, or sidebar links, even if the sidebar links change from template to template? We pretty much see them the same. I'm reluctant to say that they're exactly the same, but we do try to separate between the primary content and the boilerplate, as we call it, which is everything around it. And I don't think we differentiate between where within the boilerplate a particular link or piece of content is.

Our Contact Us page comes above the home page for branded search queries. Is this a sign of an algorithmic penalty? Last summer, the site was spammed with thousands of weird pages, but the developer cleaned it up and the site never received a message in Search Console. I don't know. Sometimes this can happen. It's pretty rare, but in general, we can recognize these kinds of shared pieces of content across a website and treat them appropriately. My tip there might be to post in the Webmaster Help Forum and double check with some peers to make sure that things are set up properly. I think when you say your site was spammed with thousands of weird pages and the developer cleaned it up, it sounds like perhaps your home page or your website was hacked and someone placed a lot of these spammy pages on your website. If you've really cleaned that up, then that shouldn't be a problem anymore. If this content is no longer on your website, we wouldn't be treating it as such. But you might want to double check to make sure that there's actually no further hack or no remaining pieces of that hack available on your pages, just to get the peace of mind that you've actually cleaned everything up.
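One quick spot check along those lines is to compare what the server returns to a normal browser user agent versus a Googlebot user agent, since some hacks only inject content for crawlers. A small sketch, assuming the third-party requests library and a hypothetical URL:

```python
# Compare the response served to a browser with the one served to a
# Googlebot-style user agent; large differences are worth investigating.
import requests

URL = "https://example.com/"  # hypothetical
UA_BROWSER = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
UA_GOOGLEBOT = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

normal = requests.get(URL, headers={"User-Agent": UA_BROWSER}).text
as_bot = requests.get(URL, headers={"User-Agent": UA_GOOGLEBOT}).text

print("lengths:", len(normal), len(as_bot))
if normal != as_bot:
    print("Responses differ; worth a closer look for injected or cloaked content.")
```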
And that's something where often the people in the Help Forum, if you give them this additional information, can double check with other tools and other tricks to see whether something weird is happening when Googlebot looks at your page like this, or when it looks at it with a mobile version, or whatever.

We have extensive reviews for a large number of small businesses. Is there some kind of criteria for being featured in this section of the local business listings? Any special markup? I don't see the image associated with this question, so it's kind of hard to say. But in general, we do have all of the markup documented on our developer site, and I'd double check to make sure that you have all of that set up appropriately so that we can pick it up there. I'd probably also double check in the Webmaster Help Forum to make sure that you're aligned with the structured data guidelines that we have. There are a bunch of people in the Help Forum who have a lot of experience with structured data who can give you some tips on how you could implement this on your pages to make it easier to pick up, or, depending on how it's actually meant to be used, if it's on other people's websites or whatever, they can give you some tips on what to do there.

Is the algorithm the same for the top 10 and for positions 11 to 100? I see websites jump directly from 100 to 20, but those that are in the top 10 take too much time to jump to fifth or fourth. So I wanted to know if the algorithm that runs there is on the same level or not. Yes, it's essentially the same algorithm that does all of this ranking. Usually what happens is that there's a lot of competition for the head of some of these queries, and that makes it a little bit harder to jump around at the top of the search results, whereas in the lower part of the search results, things can shuffle around quite a bit. That's kind of natural, because there's just a lot less competition in those places.

I want to add some products to two, maybe three, subcategories when relevant. My CMS allows me to do that without changing the URL. Would that avoid potential duplicate content issues? So even if you have separate URLs and you put the same content into two or three subcategories, that's perfectly fine from our point of view. We would not take manual action on a website like this. We would not demote a website for having the same content in multiple categories. That's a really common e-commerce setup, and doing it in a perfectly clean way, so that you really have just one URL for one product, is sometimes surprisingly hard. So that's something we've learned to live with and something our algorithms have to deal with on a daily basis. It's not something I'd particularly worry about. If you can do it by putting it all onto one URL, then I'd go for that. But if you really can't do it, and you think having this product in multiple categories is really useful for users, then don't let the search side hold you back on this.

Does it matter if the anchor text of internal links varies within the website, or is a single anchor text on all web pages of the same website also OK? Which is better from an SEO point of view? So this really depends on your website. In practice, the anchor text just naturally varies across a website, which is what we tend to pick up on as well. The anchor text does give us a little bit of context about the page that it's linking to.
So having different anchor text leading to that page makes it a little bit easier to understand that this page is relevant for these types of things. But on the other hand, we also have a lot of content on the page itself to understand how it's relevant with regards to those keywords. So I wouldn't sweat it if you can't change the internal anchor text. If you can do it within normal articles on your website, then why not go for that? But I'd try to just do it the way that naturally works within your website.

I want to optimize the content of about two-thirds of a website. Does it make any difference if I upload the changes in one day or do it continuously over several days? If you're changing the content of a website, my guess is you're probably also changing the internal linking between these pages. So if you're changing the template, or changing the textual content on a website, then often that also involves changing the internal linking, and these are things that sometimes take a bit of time for us to reprocess. But in practice, it probably doesn't make much difference whether you upload it all at once or do it in a continuous way. If you upload it all at once, obviously there will be a bit more in the way of fluctuations. If you upload it step by step, those fluctuations will be smaller, but they'll be spread out over a longer period of time. So you're trading off weaker fluctuations over a longer period of time against stronger fluctuations over a shorter period of time, and which one works best for your website is a call that you have to make on your side.

A reconsideration request was approved twice, but the manual action still shows. What's the case here? I don't know. I'd have to double check with the web spam team on that. I don't know if you have a thread in the Help Forum; often the people there are able to escalate these types of issues as well, so they might be able to help you clean out the last things. One thing, since you mentioned widgets: what might be happening here is that the manual action only affects a part of the links to your site. In a case like that, what might be happening is that the web spam team is essentially saying, well, you cleaned up all of these, but some of these are still here, so we'll just keep that in place to avoid it causing more problems for your website. It's kind of like a disavow on our side, in that we treat these links so that they don't pass any signals for you. That's something that might be happening in a case like this, and in general, it's not something you really need to worry about, because if we're disavowing those links for you already, then that would be about the same as if you removed those links. So from that point of view, that's probably less problematic. But I'd definitely check in the Webmaster Help Forum with the folks there, copy and paste the email that you received, and maybe show a screenshot of what you're currently seeing, so that someone can take a look at it and say, this is normal, or this is problematic and you need to do this to get it resolved, or maybe they need to send it to someone from the web spam team to have a deeper look at it as well.

A markup question: what if there's a rating on a product page when the product is not yet released? So I guess, for example, you have a product page for the Pixel 7 or something completely new that's definitely not released yet, and people are already dropping reviews there.
From our point of view, that's not something that we specifically differentiate for. So I don't believe we have any algorithms that try to figure out, is this product actually available, and could this person reasonably have left a review on that? So I assume our algorithms are generally trying to collect this information and hope that it will settle down over time. So as this product ends up being released, then obviously more reviews will show up that are based on actual people using the product or actual buyers using the product. There might be legitimate reasons where people would be able to review a product before it's officially released. That could be if you have releases globally, in some places it's released early, in some places it's released later, then you might have someone in one of those late regions reviewing a product that they bought maybe through someone else so that they picked up when they visited the other location and they're able to review it because of that. So that's something where I don't think we have really hard spam guidelines saying you're not allowed to review products that are not broadly available yet. I noticed within the last few months that podcast RSS feeds are now showing up in Google search results. Can you address why this change was made? Would there be any SEO benefit by having an RSS feed hosted on my own domain? My concern is having my RSS feed on the domain of another company is providing them some SEO value rather than kind of linking to my own site. So I don't know exactly what you're referring to, but I believe for quite some time we've been showing podcasts in the search results. And we do pick that up based on the feeds that people submit for their own podcasts. And we try to show that in search. And from what I understand, this is something that we purely do for usability reasons to make it easier for people to actually go to your podcast and discover the content that you have there. It's not something where we'd say, well, this podcast is hosted here, therefore we'll give this website that's associated with some additional SEO boost. So just doing that for the SEO boost, I don't think would make sense. I'd really try to limit it to what makes sense for your users. And if people love your podcast, then make it easy for them to get to your website and to recommend that further to other people. What cache systems can be used to optimize the crawling of AJAX crawls on AngularJS? Wow, that's a surprisingly complicated question. I don't know what we would recommend in a case like that. In general, what I would do if you have a website that's built on Angular or any of the other JavaScript-based frameworks is make sure that it actually renders well. So use the fetch and render tool in Search Console to make sure that we can actually render your full content. Sometimes there are small things that you can tweak and make sure that that page actually loads on there. We have, I think, a handful of pages in our developer documentation around debugging JavaScript-based frameworks and around supported functionality from JavaScript-based frameworks. So for example, if you rely on a service worker to function in order for a page to load, then that might be problematic because Googlebot doesn't actually have a service worker that it would run for your website. So these are the type of things you might want to double check. And with regards to the specific caching systems, I don't actually know which ones would work there. 
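A rough way to sanity-check this kind of JavaScript-based setup yourself, before reaching for Google's tools, is to compare the raw HTML the server sends with the DOM after JavaScript has run; a sketch using the third-party requests and Playwright libraries, with a hypothetical URL:

```python
# Compare server-delivered HTML with the JavaScript-rendered DOM for an
# Angular-style page; content that only appears in the rendered version
# depends entirely on JavaScript execution.
import requests
from playwright.sync_api import sync_playwright

URL = "https://example.com/listings"  # hypothetical

raw_html = requests.get(URL).text

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(URL, wait_until="networkidle")
    rendered_html = page.content()
    browser.close()

print("raw length:", len(raw_html), "rendered length:", len(rendered_html))
```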
If you have examples of websites using specific setups, you can test these using, for example, a mobile-friendly test, which also renders a page. It uses the mobile view of a page. But if it's a responsive design, that should be kind of equivalent to the normal view. You can try it out like that. You can also join the JavaScript working group that we set up a while back where you can ask us specifically around individual URLs, and we can double check to see what is kind of happening there in the back end. Wow, still so many questions. Where do we start? Algorithmic penalties that don't show up in Search Console, a site that we managed doesn't have and never had any manual actions. Yet the rankings are highly depressed. So I think we do a lot of algorithmic changes, and we have a lot of algorithms that affect how a site is crawled index and ranked in the end. So usually, we would see this more as trying to understand the relevance of a page rather than as a penalty. So specifically with regards to links, which you mentioned in the question, that's something we try to recognize these problematic links and filter them out on our side so that you don't have to worry about these kind of issues. If you are seeing your site kind of dropping significantly in invisibility in the search results and there's no manual action and there's no kind of technical reason that you can find behind that, then I would assume that one of our quality algorithms is looking at your site and thinking, maybe this wasn't as relevant for some of these queries as I thought. And then it would probably make sense to kind of take a step back and think about what you can do to significantly increase the quality of your website there. All right, it looks like we just have a couple of questions left. Before we head out, what else is on your mind? Is there anything specific that I need to cover? Everyone's so quiet today. We can talk about the mobile stuff going on. Is there anything? The mobile stuff. A lot of these tracking tools that you do or do not like have been showing a lot of changes specifically on the mobile side. I assume some of them are picking up on the mobile first index tests. Is that really close to launching, you think? So close to launching. I don't know if they're picking up on the mobile indexing testing stuff there, because that's probably a fairly small amount of testing that's happening there. And I don't know specifically what these tools are kind of looking at at the moment. So I wouldn't be surprised if there are other algorithms that we have which are specific to the mobile side. I think that's definitely a big focus area for us, because a lot of people are using mobile devices to search. So having algorithms that show different results on mobile, I think, would be kind of expected. So from that point of view, I don't think that just because you see these in the mobile search results, assuming these trackers are looking at normal search results, which I think they have a lot of experience doing, so probably they are, then I assume these are just normal changes that we have in the search results normally. With regards to the mobile first indexing stuff, that is something where we're still experimenting with. And the goal is kind of to recognize when sites are ready and to switch them over step by step as we see that they're ready. So that's probably not something that would just go from one day to the next, but rather be more kind of a gradual thing. 
But that hasn't started yet, that first batch of sites. Not anything official that we're announcing yet. So definitely the team is looking into the classifiers and double checking to make sure that they're picking things up properly. And I believe Gary is working on a blog post. So once we have all of that lined up, we'll definitely let you all know. OK, so you didn't say that you didn't start it. You kind of may have said you did start it. I'm looking at classifiers. OK. Well, I mean, we test with a lot of things. So I'd see that as something where we definitely try things out over time. And we make these experiments all the time. So it's not something where I'd say it's like, well, like someone flipped a big switch and everything is rolling over to mobile first indexing now. I don't think that has happened yet. Right, you said that before. You're rolling them out in batches as sites are ready. I asked you, have any sites been rolled out in that first or sub-batch, like first set of batches yet? The answer is yes or no, or you can't comment. I don't know. So it's something where we're definitely trying things out, but it's not that we have kind of an official batch that we've sent out and switched over. So it's possible that for individual sites, we're kind of already indexing the mobile version. But it's probably like a really small number. So I wouldn't see that as saying, like, we've started with this, but it's more kind of still in the experimental stage. OK, thank you. All right, so with that, we managed to get the hour through. Bunch of questions left over. I have the next hangout set up, I think, for Friday. So if there's anything that I missed that you need to have more clarification on, feel free to drop that into the Friday list. There's also the Google Form where you can submit questions for other video answers that we're working on. So that's another option there. And finally, of course, the Webmaster Help Forum, if there's something where you need to kind of discuss with other people in a more frank and direct way to get advice on what you could be doing with your website. So I hope those channels kind of help you get through the rest of the week, at least until Friday. All right, with that, thank you all for joining. Thanks for all of the questions, and see you maybe next time. Thank you. Bye. Thank you, John. Bye, everyone.