All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a webmaster trends analyst here at Google in Switzerland. And part of what we do are these Office Hours Hangouts with a bunch of interesting and interested webmasters and publishers in the audience here live and maybe some people watching this on YouTube as well. As always, if there are new people here in the Hangout that want to get a first question in, feel free to jump on in now. Looks like there's also still some room left, so time to find that link on Google Plus and join us as well if you want.

I guess if nobody wants to ask a question, I just have a quick question about SSL, HTTPS. Does the algorithm consider any certificate, at any price, as the same as far as any ranking effect goes? If you use a free SSL certificate or a very low-cost one versus a high-cost one? Yeah, those are essentially all the same for us. So if it's a valid certificate that's accepted by a modern browser, then you should be all set. So any of those certificates essentially do the technical work that an HTTPS certificate has to do. So if it's accepted by a modern browser, that usually means that it's trusted. So we can rely on that. I thought even if it's not valid, as long as it's HTTPS in front of the URL, it's still part of the ranking boost. Yeah, if we index it like that. So usually if we discover an HTTPS URL and it doesn't have a valid certificate and we know of the HTTP version, then probably we'll stick to the HTTP version that we know is OK. Obviously, it's trickier if you're redirecting your HTTP version to an HTTPS version that doesn't have a valid certificate, because then you're kind of, I don't know, messing things up anyway, because users will see a warning when they try to open your page. So it's kind of a tricky situation anyway. So I think regardless of what search would show, that's something you'd need to fix, or you'd probably be tempted to fix quickly anyway, because it's so visible. So basically as long as you have a valid and not self-signed certificate, everything else should be fine. Exactly. OK, thank you.

John, I noticed that on Twitter, you guys basically show the hreflang results, like Canada, France, and so on. Like, I mean, is there a reason why? It's not just Twitter.com, like whatever slash, like let's say you register Twitter.com slash ABC, whatever. And the hreflang shows English, whatever country you'll be in, the hreflang will show the whole entire URL. Is there a reason why you guys are doing that? Is it only for Twitter? I'm trying to figure out what you specifically mean. But I think it's like the parameter at the end where it changes the language. Yes, yes, the parameter at the end, yeah, sorry. OK, well, that's essentially what hreflang is supposed to do. So we know these are equivalent URLs. And if you say this is the German version or the French version or the Italian version, then we'll try to show that version to users. In the case of user-generated content, it's obviously a bit tricky because on the one hand, there is the language of the content that people submit. On the other hand, there is the language of the whole user interface around it. So that's something where it's hard to kind of say you should be doing it like this or you should be doing it like that. But if you already have the user interface in those languages, then letting us know about that makes sense, because these are equivalent URLs, and we should be able to pick that up for hreflang, or for showing the right version in the search results anyway. So it's not specific to Twitter. OK, it's just interesting that for other brands, it shows, and then for Google, it doesn't. I think it's probably tricky with something like Twitter where there's just so much content generated all the time that it might be hard to kind of keep up and say, OK, we're going to crawl all of Twitter and in so many different languages.
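To make the hreflang annotations being discussed here concrete, here is a minimal sketch in Python that emits the reciprocal link elements for a set of equivalent localized URLs; the ?lang= parameter, the example.com URLs, and the locale codes are hypothetical placeholders, not Twitter's actual setup.

```python
# Minimal sketch: emit hreflang link elements for equivalent localized URLs.
# The URL pattern (?lang=xx) and the locale codes are hypothetical examples.
locales = {
    "en": "https://example.com/abc?lang=en",
    "de": "https://example.com/abc?lang=de",
    "fr-ca": "https://example.com/abc?lang=fr-ca",
}

def hreflang_links(locales, default="en"):
    # Every version should list all alternates, including itself,
    # plus an x-default pointing at the fallback version.
    links = [
        f'<link rel="alternate" hreflang="{code}" href="{url}" />'
        for code, url in sorted(locales.items())
    ]
    links.append(
        f'<link rel="alternate" hreflang="x-default" href="{locales[default]}" />'
    )
    return "\n".join(links)

print(hreflang_links(locales))
```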
I've got three questions that are bugging me. I don't know if you want me to ask some of them now or none of them ever. I don't know. Was that one? No, it'll be number four, I guess. OK, I'll start with the hard one and then you can tell me if you want me to keep going. OK. I'll be fast. So I'm sure you saw this blog post from a few weeks ago where this guy named Mike from Google Canada, from the Google Brain team in Google Toronto, basically confirmed on some level that Google search is using pogo-sticking, click data, and so forth directly in their search rankings. I got the audio transcript, I published it, I emailed Google three or four, I think five times now, asking what's going on. Nobody got back to me. I'm sure you're aware of this. I'm sure you're on that chain, you and Gary and so forth. Can you explain what's going on, or are you not allowed to? I don't know what specifically the Google Brain people were talking about there. So that's kind of, I guess, on the side. But in general, we do use kind of this information for evaluating algorithms. So that's the kind of thing we've been talking about in the past as well. We do experiments all the time with regards to the search results, with regards to the rankings. And part of the way to do experiments is to see how people react to those experiments. So that's something that, from that point of view, I think is kind of nothing really new. Right. So again, one thing is using click data to evaluate how your rankings are working. And a separate thing is to use click data to rank search results directly in the search results algorithm. So PageRank, links, certain factors are used directly in the search results to do ranking. And certain things are used after the fact to say, hey, did this algorithm do well, by measuring either using the search quality raters or using the click-through rate on search results to see how well a certain algorithm is actually interacting with the searchers. Those are two different things. Yeah. So at least the raters, the way we use raters, for example, as far as I know, that's something that we do before we actually roll that out in the search results, to kind of evaluate which of these algorithm variations is working best. So usually it's something like someone comes up with an idea and is like, we'd like to roll out this algorithm. And we pull out queries where we think there are significant changes, and we have those reviewed by people who are kind of objective with regards to the search results. And based on what we get back there, we might decide, OK, we'll do a really small experiment and see how users actually respond to that. And that's kind of the data that we would use to figure out, is this algorithm performing the way we expect it to perform, or is it not performing well? Does it make sense to go down this road, or does it make sense to tweak the existing one, or does it make sense to turn that off? Those are kind of the discussions that we'd have there.
So to be 100% clear, Google is not using click data, behavior data, pogo-sticking, that type of stuff, directly in their search ranking algorithm for ranking websites in their search results. As far as I know, we don't use that. Thank you. I'll pause and continue later for other questions. OK. So you can delete that recording now. Well, a recording is a recording, but. No, I'm just making decisions. Sorry. I mean, there are lots of people working at Google that work on various things. So it's sometimes hard to keep up with who says what and how they're actually involved in search. So that's kind of tricky, too. I also heard in some webinar that local AdWords helps your Google Maps listing rank number one. I doubt that. I mean, I have no insight into either AdWords or Google local rankings, but I kind of doubt that. But yeah, OK. Lots of stuff is online. That's kind of weird.

All right, let me run through some of the questions that were submitted. OK, this one isn't much easier. We're seeing ongoing fluctuations for about 20% of the landing pages on a website. It's happening on a keyword level. So a URL will rank well for some keywords, but for others, they'll jump weekly from page one to page six or unranked and then be back on page one. There are no visible duplicate or canonical issues. It's been ongoing for over two years now. And before that, there was some large-scale migration work. Do you think the migration effects could still be being felt, or is something else at hand, maybe something backlink related? So I really would probably need to know more about what exactly you're seeing there and what you're looking at. It's really hard to say what might be happening there, just based on a question like this. On the one hand, for example, I don't know how you're measuring these things. So if you're doing something like using a third-party tool to scrape the search results to figure out where a site is ranking, then sometimes you might see fluctuations that are based on various other things kind of involved there. So that's something where I'd be kind of cautious about looking at it like that. On the other hand, if you're looking in Search Console and you're looking at the search analytics data and you're seeing things fluctuate there, then that would be something that users have actually seen. So kind of these theoretical scraping tools, the rankings there, I wouldn't focus too much on those, because if users don't actually see those rankings, then that doesn't really have any effect on you anyway. But if you're seeing it in search analytics, then that's something that users have actually seen. And if you're seeing strong fluctuations there, then sometimes that's just a sign that things are being fairly personalized, in that perhaps some users are seeing the page very visibly, some users are not seeing the page very visibly. And that can be completely normal as well. This is particularly noticeable if you have a really small number of impressions. So let's say only 10 people are seeing your website in the search results. And half of those are seeing it on page one. Half of those are seeing it on page five. Then obviously that will look like things are bouncing around quite a bit. But if you're talking about five impressions, then that's something that's kind of like in the noise for most websites. So that's kind of what I would try to figure out first. Is this really something that is strongly visible? Or is this something that's kind of in the noise?
Or is this something that's more of a theoretical type situation that doesn't actually affect anything on the website itself? With regards to large-scale migration work that was done over two years ago, this would not be related to that. If you do large-scale migrations on a website, then sometimes it takes a bit of time for us to recrawl and reprocess the website. But usually that's a matter of maybe, I don't know, half a year or up to a year at the latest with regards to us being able to actually reprocess the whole website. And usually after a fairly short period of time, maybe a couple of weeks, you'll see things kind of stabilize already. And it's just that long tail that takes a bit of time to actually settle down again. So this would definitely not be related to any large-scale migration that was done over two years ago. Backlink related — there's a question about that in the question. I don't know. I would assume this is not something that's related to anything from the links. What I'd do in a case like this is maybe post in the Webmaster Help Forum or get some input from other experienced webmasters, and show some of the data that you're looking at, show some of the queries that you're looking at, the results you're seeing, the changes you're seeing in the search results, so that people can give you a bit more targeted advice to say this is something you need to worry about and you need to fix, maybe there is a technical issue on your website, or they can tell you this is kind of just noise, you can ignore it completely.

How descriptive should an alt image tag be, and is there a maximum character length? Is it a case that the more descriptive you are, the better? And can this help with regular ranking as well as with image search? So the alt attribute, not a tag, is basically a way of describing what the image is. So in particular, you should look at this as something where, if you can't load the image, then the alt attribute gives you some information about what would have been there. So it's not something where there's a limit, like a minimum or maximum limit, that you need to kind of stick with. Sometimes it makes sense to have two words, sometimes it makes sense to have three words or four words or a sentence or something like that. And I'd focus more on that aspect than on any kind of ranking aspect there. So we do use the alt attribute for image search; particularly that's where it's really helpful, because that gives us a lot more context about that image. So I'd have it slightly descriptive so that it's obviously clear what image that should be. On the other hand, we also use that as a part of a page for web search, but for web search, in general, we can pick up enough other content from the page that we don't need to rely on the alt attribute there. We can pull out information from the rest of the page. So essentially the short answer is it depends, and you should probably just think about it as: if this image weren't visible, what would you kind of want to tell the user about that?

I've read that adding noindex to low quality pages can improve your Panda score and the rankings of remaining pages will improve. Our e-commerce site has some categories with products which we never sell, but we have them there to show the whole range. Is it worth noindexing these pages, and could that have a positive or negative effect in Google's eyes? So we don't have something like a Panda score where you can just tweak things and we will change the score on our side. So that's kind of one thing to keep in mind.
In general, the aspect about noindexing low quality content is kind of to make sure that your website as a whole, the way that we index it, is providing high quality, unique and compelling content. So whenever your website shows up in search, when users click on that page, they actually find something really useful there. So that's kind of our goal there. So if you have pages that you know are really low quality, then on the one hand, you can just improve those pages. That's kind of the best thing to do there. If you see people going there and you know these are low quality pages, then it's kind of a missed opportunity to not improve those pages so that people actually stick around on your website and convert and buy something. So that's usually what I'd recommend first. On the other hand, if you can't improve the quality of those pages overall, then using a noindex is another option. But with that, you're basically saying: I have this page on my website, people can navigate to that page within my website, and if they want to find it on my website, they can, but I don't feel like it really should be highlighted in the search results. And that's fine. That's essentially your decision to make.

If a user lands on my website via some query, reads an interesting and compelling article and decides he wants to look at the other results for that query as well, does that mean my page is low quality? No, why should it mean that your page is low quality? I mean, that's kind of normal on the web. And depending on the type of content that you have on your website, some people spend a lot of time to read it. Some people kind of click away very quickly. And just because people are clicking away or doing something else afterwards doesn't necessarily mean that your page is bad. Obviously it depends a lot on your website. It depends on what you're trying to offer on your website, what you want people to do. If the goal of your website is to convert them into a subscription or something else, or to sell them something, and they don't stick around on your website, then you haven't kind of led them to that goal properly. But that's more of an issue between you and your users and what you provide on your website, and less of something that we would worry about with regards to search. Would you guys appreciate chatbots in the future, to kind of keep the users around more? I mean, would that kind of help? Is that something that you guys want to see eventually in the future? I don't know. Most chatbots I see are really basic and useless. I wouldn't, like, try to artificially change these metrics, because you're just kind of short-selling yourself. By artificially keeping people on your website even though they don't actually want to buy anything or convert, you're not doing anyone a favor. It's, I mean, if you can add value by having a chatbot there, and if you know users have questions that you can't answer proactively within your content, then maybe that's an option. Maybe there are certain strategies that you could use to kind of show it at the right time to the right people. But I don't think just randomly adding chatbots to a page makes that page better. Right, okay, thanks.
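Going back to the noindex option for those never-sold category pages, here is a minimal sketch of how a template might conditionally emit the robots meta tag while leaving the pages reachable through the site navigation; the page data and the zero-products rule are hypothetical, just to illustrate the mechanism.

```python
# Minimal sketch: emit a robots noindex meta tag for thin category pages
# while keeping them navigable on the site. The zero-products threshold
# and the category data structure are hypothetical.
def robots_meta(category):
    # Categories with nothing sellable stay on the site, but are marked
    # noindex so they aren't highlighted in the search results.
    if category.get("sellable_products", 0) == 0:
        return '<meta name="robots" content="noindex">'
    return ""  # default: indexable

print(robots_meta({"name": "discontinued-widgets", "sellable_products": 0}))
print(robots_meta({"name": "lawnmowers", "sellable_products": 42}))
```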
Are links in the main navigation treated in the same way as links in the footer, given that they're site-wide? We have a few popular products which we want to give a little prominence to, so we don't know if it would be better to put them in the navigation or in the footer or in both. What's the best approach there? So essentially what happens with regards to webpages is we try to figure out which part of the page is the primary content and which part of the page is the boilerplate. And the boilerplate for us is kind of everything that doesn't really change when you visit different pages on a website. So that includes the navigation, the header, the footer, everything, usually, around the primary content. So if you put something into the boilerplate, then from our point of view, that's kind of the same. So if you have it in the sidebar and in the footer, that's kind of the same thing for us. In general, if you think that something is important and you'd like to link to it from different parts of your website, that's perfectly fine. I think that's a good strategy. So if you have a few popular products that you want to give a little bit more prominence to, and to kind of show them, highlight them to users, I think putting them in the navigation or in the footer or wherever you think users might find them is definitely a good idea.

We filed a spam report some time ago but haven't seen any change. Does every report get manually checked? No, we don't manually review all spam reports. So this is something where we try to figure out which spam reports have the biggest impact, and those are the ones that we focus on, those are the ones that the web spam team would then manually review and kind of process and perhaps take manual action on. And most of the other ones that we see tend to be things that we can collect over time and that we can use to improve our algorithms over time. So that's usually where we see the most value out of these spam reports, in the sense that instead of taking action on this one page that's doing something wrong, and doing that manually, and kind of following through with that one specific page that was reported, if we can take that information and apply it to a whole bunch of pages or the rest of the internet, and really recognize these issues a little bit more in an automated way, then suddenly that small spam report has a lot of value. But that's not something that you would see from one day to the next. That's something that usually takes quite a bit of time for us to actually go through those and to make sure that, based on these reports that we've been seeing, we're algorithmically handling this type of spam or this type of abuse or this type of issue across the whole web in a little bit of a better way. So that might be what you're kind of seeing there.

John, can I ask a question? Sure. All right, so I think back in November you guys said, or somebody said — maybe Maile said — that if you have a desktop-only site and no mobile site, but you do have AMP, when the mobile-first index rolls out Google will index the desktop version of the site. They might link to the AMP version, but they would index and understand the signals from the desktop version, because they're not gonna do that with the AMP version for some reason. I think Gary just said yesterday or a couple days ago at Brighton SEO that if you have an AMP version but no mobile version, Google is gonna look at the AMP version to index. Yeah, I think that's a tricky situation on the one hand, and probably more of a theoretical situation, in the sense that if you're implementing AMP, then you basically have a responsive site already with the AMP content there, so it would be kind of weird to not have something mobile friendly elsewhere on the website.
So that's kind of, from my point of view, almost like a theoretical question: if you had this weird setup where you're doing a lot of work to improve on your website, but actually you're not doing that basic work as well, then what would Google do? The other part is, I think, also a question with regards to how that might make sense, in the sense that if we start to see that actually the AMP version is the version that everyone sees on mobile, maybe we should just assume that that's kind of the mobile version of that page and use that accordingly for mobile indexing. So what I would do in a situation like this, if this were my website and I really couldn't set up a normal mobile website, I would just link the AMP version as the mobile version of the page. So that's something you can also do: basically use the separate-URL way of linking smartphone pages to your desktop page, and link to the AMP version like that. So you can have both link rel=amphtml and link rel=alternate to the AMP page, and then suddenly that AMP page is the mobile version of your page. So theoretically, just to be clear: you have a responsive site and AMP, Google's gonna index, in the mobile-first index, the responsive site, not the AMP version, correct? That's the goal, yeah. And if you have a desktop-only site and for some reason you're crazy and you have an AMP version, and you're not using that way of linking to say this AMP version is my mobile version — you're not doing that — but you have a desktop version and you have an AMP version, Google's gonna index the desktop version with the mobile-first index? I don't know if we've actually defined that completely yet. So like I said, it's a very theoretical situation, and I could see arguments going either way. So that's... Okay, thank you.

John, in Barry's scenario, what if you use — so let's say you have a desktop version and the AMP version as the mobile version, but you use dynamic serving — how do you use the canonical and rel=amphtml and things like that? Or do you not need to? I think if you're using dynamic serving to serve an AMP page, you probably will have problems getting that recognized. Oh, okay. Because we crawl the AMP page, I think, with the normal Googlebot, and if you're using dynamic serving, then we would never see the AMP page, and then it would be kind of confusing for us. Okay, so you're not crawling with a mobile bot? I don't know for sure, but I believe we crawl the AMP pages with the normal Googlebot. Okay. It'd be easy to test. So try it out and see what happens. But because of that, it makes it really hard to have something with dynamic serving and an AMP version as well. And this gets even trickier when you're talking about non-Google clients, so if Twitter is looking at your page and wants to pull out the AMP version of your page, then depending on what parameters you use in the user agent, you suddenly get an AMP page versus a normal HTML page. Yeah, okay, so don't do it. I'd avoid that, yeah. It just makes everything a lot more complicated than it needs to be. Okay.
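To make that separate-URL linking concrete, here is a minimal sketch of the markup involved, generated in Python; the example.com URLs and the max-width value are hypothetical placeholders.

```python
# Minimal sketch of the linking described above: the desktop page points at
# the AMP page both as its AMP version and as its mobile alternate, and the
# AMP page points back with rel=canonical. URLs are hypothetical.
desktop_url = "https://example.com/article"
amp_url = "https://example.com/article/amp"

desktop_head = "\n".join([
    f'<link rel="amphtml" href="{amp_url}">',
    # Treat the AMP page as the smartphone version (separate-URL setup).
    f'<link rel="alternate" media="only screen and (max-width: 640px)" href="{amp_url}">',
])
amp_head = f'<link rel="canonical" href="{desktop_url}">'

print(desktop_head)
print(amp_head)
```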
All right, let's see, more questions here. We updated our schema markup two months ago and it's showing no errors, and it's being indexed incredibly slowly. I had my markup checked on the forums and the experts said it was all good. We've got approximately 2,000 pages and 1,200 products, but for some reason it's only indexed 320 products in product markup, up from about 180 two months ago. Previously it was indexed very quickly. The other markups are also not getting indexed quickly anymore. It's hard to say what might be happening there. In general, I would use these reports in Search Console as a way of recognizing trends with regards to how much we're actually picking up from these pages. So I wouldn't assume that if you have marked up your website, you'll see that number going up to exactly 1,000. It's likely that we focus on kind of the more important URLs within a website, and for that we might be showing a slightly different number even if we've already reached all of them. So if it leveled out at 320 and you know there are 1,000 products, then my guess is we probably have picked up all 1,000 products, and if you do something like a site query for individual products in there, you'll probably see that markup in action.

My competitors have tons of backlinks from widgets in other websites. This is supposed to be violating Google's guidelines, but they remain in first and second position for many years. We have a much better website, but we can't compete with that backlink profile unless we also do that tactic. What should an SEO manager witnessing black-hat actions do besides waiting for an algorithm update? So on the one hand, you can submit spam reports. Depending on the situation, we will look into those and take action where appropriate. On the other hand, we do have a lot of algorithms that try to recognize this type of situation and just ignore those kinds of links. So what might be happening there is that these websites are ranking fairly highly despite having all of these widget backlinks there. It might also be that they're already disavowing those backlinks and they're actually by design not taken into account, and rather these websites are just ranking there for other reasons. So my recommendation there would be to, on the one hand, submit a spam report if you think that this is really spammy and a problem. On the other hand, focus on your website instead of focusing on what they might be doing. So instead of just trying to do what they're doing one to one and following them, focus on your website and what you could be doing to kind of significantly take things up another level. And I know that's sometimes hard. Especially when you have competitors and they're very visible, it's sometimes hard to kind of take a step back from focusing just on what they're doing. But essentially that's kind of the long-term strategy that works on the web. Anything that you do on your website to make your website even stronger and better is something that you can keep. Whereas if you focus too much on their website and you analyze their website into tiny bits, then that's something that doesn't add any value to your website.

To rank for different keywords on the same page, can you give any advice for how to best do this? Is diversity of internal anchor text going to that page just as important as the on-page content itself, or do external links rule? I'm not quite sure what this question is trying to ask. You said last time: add the keywords to the page, make it clear what you're selling. You said that in one hangout, I think. Very cool. I mean, you can definitely have one page that ranks for different words or different keywords, different phrases. That's completely normal. That's kind of the way things work. You don't just put one word on a page. You have information on the page, and that information can be relevant for a whole list of different terms and phrases. And read the page to yourself after and see if it makes sense.
Yeah, yeah, definitely. Read the page aloud. Make sure it's kind of natural text. Just focus on the text itself. That's kind of the main thing I would do there. And also kind of keep in mind what you're trying to do there with users. Why should users go there? And how can you make that clear? What are you trying to offer them? And act on that. And with regards to links and things like that, if you have a website set up that links to different pages within your website, then use a link that makes sense. So don't just use something that has a nondescriptive anchor on it, but rather use a link that actually is descriptive for the page that you're linking to within your website. So for example, one thing we often see is something like, you can find more information here. And "here" is the one that's actually the link to the page within your website. And "here" is not a very descriptive anchor. It doesn't really tell us a lot about that page. On the other hand, if you have something like, on our website you can find more information about this specific topic, and you link that whole thing to that other page, then that gives us an anchor to work with, something that helps us to better understand the content of the page that you're linking to within your website. And I think someone from the HTTP Archive team just did an analysis of the various anchor texts that were used across websites. And some of the ones that are most common were really kind of surprising to me. So for example, I think the most common one that he found is like a little triangle thing, which a lot of websites use to kind of open up different categories on a page. So that was interesting.

OK, we use a third-party, Google-approved company to collect reviews. Over a month ago, we implemented an API to mark up those reviews with review schema, but nothing is showing in the search results, even though these have been done correctly. I've seen other sites who mark up fake reviews, which are shown in snippets, but ours are not. What can be done here? So what I'd recommend doing in a case like this is, first off, getting kind of someone else to take a quick look at what you're marking up and how you're marking that up. So one way to do this could be to post in the Webmaster Help Forum or to go to another webmaster community and get some input from other people to kind of gut-check the way that you're marking up things on your website, and maybe to gut-check some of the things that are common mistakes that are made. So for example, one common mistake that we sometimes see is when the business itself is reviewed, and that review is marked up on all pages of the website. So we really want to make sure that the structured data that you have on your page is relevant to the primary content of that page. So if you're selling, I don't know, lawnmowers, for example, then that review markup should be relevant to that lawnmower which you have on that page, not specific to your company. So that might be one thing to watch out for. Another thing that I've noticed that people sometimes have problems with when it comes to implementing structured data markup is, once everything technically passes through the validators, it can still be the case that our systems say, we don't really know if we can trust this website. And in cases like that, we might not show the structured data in the search results. One way you can sometimes test that is to use a site query for your website. If we're showing the review markup in the site query, but not for normal queries, usually that's a sign that our algorithms are kind of like, I'm not so sure about the quality of this website overall. And that's the situation where you can kind of take that hint and say, OK, I'll significantly improve the quality of my website, or I'll try to find ways to do that — maybe through user studies, get input from other people — to figure out what we can do to significantly move things forward.
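As an illustration of tying the review to the product on the page rather than to the business as a whole, here is a minimal sketch of what that structured data could look like, emitted as JSON-LD from Python; the product name, rating values, and reviewer are hypothetical.

```python
import json

# Minimal sketch: review markup attached to the product that is the primary
# content of the page, not to the company itself. All values are hypothetical.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Lawnmower 3000",
    "review": {
        "@type": "Review",
        "reviewRating": {"@type": "Rating", "ratingValue": "4", "bestRating": "5"},
        "author": {"@type": "Person", "name": "A. Customer"},
        "reviewBody": "Cuts well, a bit loud.",
    },
}

print('<script type="application/ld+json">')
print(json.dumps(product_jsonld, indent=2))
print("</script>")
```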
What would you consider the best user experience from Google's point of view — the user finding what they were looking for in one click? I think that really depends on what the user is trying to do. So I was talking about this with someone else a while ago today. And it seems that users do very different things on Google. And depending on what they're trying to do, one approach might make more sense, and another approach might make more sense in other situations. So if you're trying to figure out what the time is in New York City, then maybe just getting that time directly in the search results is great. On the other hand, if you want to figure out which car you want to buy or what hotel to take in New York City, then you probably want a variety of different results that you want to open up in different tabs or open up separately, and go through that and search in different ways to refine things until you have more and more information about that topic, so that you can make a decision. And those are very different approaches that users might have to actually finding information out there. So it's not the case that I'd say there's one metric that we should focus on, and this is the only metric that really counts. But rather, users come with a lot of different expectations, and they want to get help on whatever problem they're focusing on. So another thing that I sometimes run into is, I don't know how to spell different words. And sometimes I'll just throw them into Google and see which of these spelling variations has the most results. And I will assume that probably that's the correct spelling. And in a case like that, I don't even want to look at any of the results. I just want to see, am I approximately getting it right or wrong? So it's really sometimes tricky to say, this is exactly the metric that Google should be using to optimize for us.

How important is a sitemap page on your website — not a sitemap XML, but one for visitors? And what's the best way to implement it? Should it be noindex? I think it really depends on your website. I think the situations where you can make one HTML sitemap that covers all pages within your website are kind of limited, in the sense that a small website should have a clean navigation anyway that makes it possible for people to find all pages within your website quickly. And larger websites are so big that having a single HTML page that has all of the links to all of the pages doesn't really make sense anymore, because it's not really usable. So from that point of view, it's something where I'd recommend just really thinking about your website yourself and trying to figure out where users are getting stuck and what you could be doing to help them to find the information they're looking for. Sometimes that takes some user studies — watch people in person to see how they interact with your website.
Sometimes you can get this information from analytics, where you see kind of people wandering off in your website or going to your search form and typing different things to search for within your website. So that's something where I think there is no one-size-fits-all approach with regards to an HTML sitemap page.

Talking about a one-size-fits-all approach, is there a one-size-fits-all approach to the top three rankings in Google? Ranking signals in Google? No. OK, that was easy. Wow, that's just what I said on Twitter. So you actually dynamically change which ranking signals are more important based on maybe the query or the site or something like that? I think that totally makes sense. I mean, if you're searching for something like, I don't know, these hurricanes that are currently around. So if you're searching for, I don't know, what is it, Maria at the moment, the new one — if you're searching for that, you probably want something very different when you search now than if you searched maybe a month ago. And from that point of view, it makes sense to use vastly different ranking signals for those kinds of queries at this moment if they're very different topics. So if I search now, all I get is just everything about Hurricane Maria. There's nothing that isn't about Hurricane Maria. But if you had searched for it, I don't know, a month ago, then it's like, there's a hurricane coming up called Maria, and you're like, what is this? What am I supposed to do with this? It's not useful information. So that's kind of the most basic situation, I think, where it's something kind of common that people search for anyway, but depending on the context, it makes sense to put completely different search results together for a query like that. And finding that balance is really hard — kind of figuring out when it makes sense to show more shopping results versus more brand results or more kind of general informational results. So I don't think there's one set of ranking factors that we can just kind of say, well, these are the top three and they apply to all search results all the time.

Right, so let's just make it a little bit more interesting. So back in the day when Tim Mayer was running Yahoo Search, he was on a panel — I think it was probably like 2003 or something, or maybe 2005, whatever — and he was talking about spammers and stuff like that. And you might remember this quote. He said, you don't bring, like, a knife to, like, a gunfight. And that kind of implied back then that you guys are using different algorithms for more spammy industries as well. So if you're doing a search for, let's say, I don't know, adult content or certain types of drugs or something like that, and it's very spammy in nature, do you guys, like, say links are maybe not as important in that space? Probably. Probably what? Probably, I don't have exact answers there, but I think that would totally make sense, to kind of treat those kinds of search results in different ways. And it's something that I always see as well with regards to, for example, smaller businesses, where maybe two small businesses will be trying to compete for the same kind of terms. And one business goes off and does a lot of work with getting links from people who are talking about their product. And the other business does something completely different and doesn't focus so much on links. So on the one hand, there's one page that has a lot of links. On the other hand, there's another page that has a lot of other positive signals.
And we have to find a way to kind of compare those and rank those. We can't just say, well, links are important, therefore this one will always be on top. Maybe sometimes this one will be on top. Maybe sometimes the one with the links will be on top. That's something where, because we have so many different signals, we can combine them in different ways. And depending on the query, on the context, sometimes we take more of this, sometimes we take more of that. So from my point of view, it makes a lot of sense to focus more on all of the things around that query, to try to figure out what is relevant for the user at this point in time, which might not be the same thing at a later time, which might not be the same thing for other queries, which might not be the same thing for other users. Bottom line is you need all of them — you have to cover all 200 factors to succeed in Google. No, seriously. We try to figure out what makes sense. So from that point of view, it's also even more the case that if your competitor is doing this, then you don't have to do exactly that too. You can focus on things that make your site better in your own way, and that kind of balances things out as well. Yeah, I think Yandex actually just announced a few months ago that in certain types of industries, they're not gonna use links at all, because it just pollutes their algorithms. Not that Google would do that, but I'm just saying it's interesting that they would say that. Maybe Google says in certain industries they look at links a little bit less. So you're kind of implying that might be the case. I don't know. I think that would totally make sense, from my limited point of view. I don't know the specifics around what Yandex recently announced there. I believe they did something similar a while back for some kinds of local results in some areas. So this is something where, from my point of view, it definitely makes sense to look at different factors at different times. And sometimes one thing can play a stronger role, sometimes something else can play a stronger role. Sometimes we have both of those in the same search results and we have to figure out, which one of these do we weigh stronger at this time? Which one of these do we look at stronger later on? So that's, from my point of view, kind of the reason why there is no simple way to say these are the top three ranking factors at Google — because it depends so much. Plus, the last guy that said that hasn't been talking much since, so... I'm just joking. Yeah, I think he's doing other big things though. So it's not that we sent him off to Siberia, unless he's going there for a business trip, but that's something different.

I have a question, if you don't mind. Sure, go for it. So I have a classifieds website and I have 20 pages for different cities — 20 different cities — but only eight cities are profitable for my business right now. So should I assume that if I noindex the other 12 cities that are not generating any profit for me, or if I just remove the internal links, am I improving my rankings for the eight cities that are profitable? Not necessarily, but it's something that, depending on what you're doing there, could theoretically help. So if, for example, you have different cities and you can fold them together into one stronger page, and use a rel canonical to say these five individual pages can be combined together in this one regional page, then obviously that regional page will be a bit stronger. So that can certainly help there. On the other hand, if you just say, well, I have five completely separate things here and I'm just going to hide four of these, that doesn't necessarily make that one page that's remaining much stronger. Okay, so I could just create a hub for all the cities that are not profitable and put them all together there. Maybe, yeah, that's something you could look at. That's something where you could also look at how people are actually using your website — if they're going there or not. Like, if everyone is going to those pages that are not profitable to you, then maybe you should change something with regards to how you monetize those pages. But that's a tangent there. Okay, cool, thank you.
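For the folding-together option, here is a minimal sketch of what that looks like in markup terms — each thin city page pointing its rel=canonical at the combined regional page it's folded into; the city and region URLs are hypothetical.

```python
# Minimal sketch: thin city pages point rel=canonical at a combined regional
# page so their signals can be folded together. URLs are hypothetical.
regional_url = "https://example.com/listings/bavaria"
city_urls = [
    "https://example.com/listings/bamberg",
    "https://example.com/listings/bayreuth",
]

for url in city_urls:
    print(f"{url} ->")
    print(f'  <link rel="canonical" href="{regional_url}">')
```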
All right, could you tell me — let me know — if there are any implications for crawling and indexation if a page or resource returns a 206 (partial content) response code? Does Google have any particular dislike for partial content on my site? So let me just double-check that I have it right. As far as I know, this is basically a response code that says we don't actually have the full content for the page, and we follow that response code and we try to figure out what the full content of that page is, so that we can actually index the full content of that page. So it's not that we do anything kind of unique for that HTTP response code. We just try to follow essentially what the HTTP standards are and kind of act like a browser would when it runs across a response code like that.

What percent of search is via voice search? I don't know if we have any public numbers on that. So I don't know if I have anything that I can share there.

Let's see — the phenomenon on my client's site: the website's positions haven't changed much, but for our main keyword, the site sometimes appears and sometimes not. The ranking position of the main keyword doesn't change; it disappears and returns most of the time. This phenomenon began about two weeks ago. There's no error on the website. The website in Webmaster Tools looks normal. How can this happen? I don't know. This seems like something where there's almost more of a technical issue on the website, where maybe something is noindexed briefly or looks to us like a soft 404 page, and then we submit it again and it shows up again. Those types of situations are sometimes tricky to recognize. So especially if there's a noindex there, then Search Console in general wouldn't highlight that as an error, but rather it would say, well, you have a noindex here, so we will drop this page from the search results. And particularly if you're saying that the page disappears completely from the search results, then that sounds more like either a noindex, or we're recognizing a soft 404 situation sometimes and dropping the page from the index because of that. So that's something that's sometimes a bit tricky to figure out. What I would do there is maybe also post in the Webmaster Help Forum to get some tips from other people. With regards to soft 404s, one thing to watch out for is if you have different components on your webpage that are generating the HTML, and if some of those components are generating things that look like server errors, then we might assume that the page is actually a server error.
So if you have something, for example — I don't know — a comment section on your website, and sometimes it doesn't load and it returns an error that says, oh, this page couldn't be loaded or this content couldn't be loaded, then we might look at that page and say, oh, there's this error page on the bottom here; maybe this whole page is actually an error page and we should be dropping it from search. And usually those are the situations where we would show that in Search Console as a soft 404. So that's one place you could look in Search Console for that specific type of issue. If you don't see it as a soft 404, then most likely it's more something like a noindex, or maybe a 404, or something else where you're returning something that's causing our systems to drop the page from the search results. If you are kind of stuck and really have no idea what that might be, you can on the one hand check your server logs, where you should be able to see whether or not the content returned to Googlebot is the same — if it's a consistent size, and what the result code was there — because usually if there's an error page that's returned, then usually you'd see a significant change in the size of the result that you've returned. So you can track that there. And if you can't figure it out at all, you can send me the link on Google Plus and I can take a quick look to see if there's anything particular that I can highlight to you about that there.

A question about the fetch and render tool. If a site isn't being displayed correctly in both the Googlebot and visitor windows — for example, one of the slider images above the fold is taking up the whole screen on the render and none of the rest is being displayed — could this cause issues with the site's SEO performance? In general, no. So if we can still render the rest of the page, that's perfectly fine. One thing to watch out for there is that for fetch and render we render the page with a really high viewport — something, I don't know, like 9,000 pixels or something similar to that. So if you're using CSS that's adapting the image size to the height of the viewport, then you might run across this issue. And it's not so much that you'd see an effect in search from that because of a high image, but more that it just makes it harder for you to diagnose if the rest of the page is actually loading correctly. So like in your situation here, you don't really see for sure that the rest of the page is actually being loaded properly, because you have this really high image on top of that. So if you had that image smaller, then that would be easier to double-check and say, oh yeah, the rest of the page is actually loading properly.

Hey, sorry, John, can I ask a question before we run out of time? Sure, go for it. So we're seeing a lot of renowned companies blocking their API endpoints through robots.txt. And we wonder, how does Google discover those links, and what's the effect of blocking them? Does it leave more crawl budget for the rest of the site? Sometimes. Sometimes that helps us to crawl more from the rest of the website. But the thing I find with sites blocking this by robots.txt is that it's usually more that they don't want random people using their API. So that's kind of the effect that we're seeing there. The effect in general for us is more that if other pages rely on this API to actually show content, then we wouldn't be able to show that content in our index for that web page. So for example, if you have a Maps API and you're blocking that by robots.txt, and another website is using the Maps API to pull out a list of the locations that it has, then those locations might not be crawlable for us, because we can't access the Maps API — because it's blocked by robots.txt. Say if my site is fully server-side rendered, does that mean blocking the API from Google wouldn't hurt our, I guess, our indexing? I mean, if we don't need to access the API to get the content for your pages, then you can block it by robots.txt if you want to. Usually what will happen is we'll list that in Search Console as a blocked resource, but if you know that this doesn't change anything on your pages, that's perfectly fine. So for example, a situation where it's absolutely no problem to block things is if you have a counter on your website, or if you have kind of a tracking pixel or something like that, and you have an API for that — then obviously blocking those API calls doesn't change anything for the rest of the page. On the other hand, if you have an API that pulls out the primary content of the page and you're blocking that, then suddenly that primary content is not indexable anymore. Got it.
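To see what a robots.txt block on an API endpoint actually means for Googlebot, here is a quick sketch using Python's standard robots.txt parser; the domain and the two paths are hypothetical.

```python
import urllib.robotparser

# Minimal sketch: check whether Googlebot may fetch an API endpoint that a
# page depends on. The domain and paths are hypothetical examples.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetches and parses the live robots.txt

for path in ["/api/listings?city=zurich", "/listings/zurich"]:
    allowed = rp.can_fetch("Googlebot", "https://example.com" + path)
    print(path, "fetchable by Googlebot:", allowed)
```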
I also have a question regarding Google Maps and mobile-first. So we're trying to prepare for mobile-first, and our site is a real estate website, so we're trying to show listings on Google Maps for users. So usually on a mobile web device, when you show a map, it basically takes up the entirety of your phone screen, and therefore the user will not be able to scroll the page down because he's basically panning the map. Then what's the best way to make, say, the listings and markers on the Google Map crawlable in the mobile-first world? That's tricky. I don't know. I don't think I have a perfect answer for that. So probably what you'd want to do is think about whether or not it makes sense to have those listings that are visible on the page also available below the map somewhere, in the form of bullet points or some normal text-type listing. Even if the page is not scrollable? I don't know. I don't know how that would be implemented. Yeah. We're trying to think of something, maybe hidden somewhere, because we heard hidden content on the mobile web is sometimes not penalized. Yeah, I mean, if you have something like a hamburger menu that also includes this, that might be an option, or if you have the ability to switch between the Maps view and a listing view, that might be an option as well. It's sometimes hard to balance the UI limitations on a mobile device with making sure that everything is really crawlable and indexable. It keeps it exciting. All right, more on that.

Yeah, on that note, then, what date do you expect — I mean, I know a lot of people are probably asking, even offline — what date is mobile-first coming? I don't know. What shall we pick? I'm not asking because I want to know, or because I'm ready to go for this, just so we can kind of — all SEOs around the world will know, kind of, just maybe. Yeah, I don't know. I don't have a date. So this is something where I expect from the mobile indexing team that we will gradually roll this out. And the current plan is that we try to recognize sites that are ready for this. And if we can recognize that they're ready for this — that there are no issues being caused by going to mobile indexing — then we will switch them over. And that's something that's going to happen gradually. So some sites will be switched over.
And if we think that nothing problematic will happen, they might not even notice that this is actually happening for their site. So, like, the Google reviewers, they just pretty much check all around and then report it back to you? We try to do this in a scalable way. So we can't really look at these pages manually and say, oh, this is good, this is bad. But rather, we create classifiers to figure out when a site is ready. And we try to evaluate and test these classifiers: are we picking up the right signals? Are we looking at the right things? And based on those signals and the classifiers that we come up with, that's also what we are going to use for our general guidance with regards to what people should be watching out for or what they should be doing when it comes to mobile indexing. So first of all, we try to figure out what is actually relevant, not just what we think theoretically might be relevant. And based on that, we'll kind of create some more public guidance on that. OK, thanks.

Hey, Josh. Hey, John. Hey, Josh, may I? Yeah, please go ahead. Actually, we implemented AMP a year back. So the thing is that the indexed count of AMP pages which is shown in Search Console is very low. Compared to the indexing of the normal desktop pages, it's very low. So what might be the possible reasons our AMP pages in Search Console show so low? Hard to say. On the one hand, like I mentioned before, I wouldn't focus so much on the absolute count. If you're seeing this trend that the numbers went up and then they stabilized, then that's probably OK. Especially if you're looking at something like a site query in comparison, then that's something where I wouldn't expect that kind of graph for the AMP pages to go up to the same number as a site query, just because a site query is kind of by design such an approximate number. The other thing I would kind of watch out for there is just the general AMP errors. So in general, if we can index a page and recognize that there's an AMP version for it that's valid AMP, then we'll pick that up and use that for AMP. So if we're looking at your pages and we're indexing the pages normally — the new ones that you have — and we're not recognizing the AMP version there, then probably there's something technical that's kind of blocking us from picking up that connection to the AMP page properly. So the thing is that it's creating a kind of wave. It means suddenly it will rise and suddenly it will drop. So in Search Console, we are seeing the flow like a wave. So yeah, I don't know what I can say there in general. So I think, on the one hand, this is something it might be interesting for the AMP folks to look at. So maybe you can post in the Webmaster Help Forum with a screenshot there. I think our tech team has been in touch with Google, but we didn't get any satisfactory answer from them either. So if it would be better, I can message you our site, and if you have time, you can look into it. I can take a look, but usually the AMP team is much better in this situation to kind of diagnose these things. The other thing that I'd just like to mention as well is that sometimes these graphs do go up and down, and there's no specific reason for that, in the sense that we don't index all pages on a website. And even for a website that doesn't change at all, if you look at something like the index status report, then you might see things going up and down.
And that's kind of the normal flow of how our algorithms adapt to your website, how they respond to the rest of the web. And sometimes those graphs just do go up and down. And it's not the case that there's anything particular that you're doing wrong. It's just sometimes we have time to index more; sometimes we don't have that much time to index so much from this website. These kinds of changes are kind of natural. A small question: so should we have a different AMP version for the pagination as well? Meaning page two, page three, like this. Currently we are sending the main canonical AMP version to the other pages. So essentially the AMP version should be equivalent to the other version that you have. So if you have pagination on the other version, then I would do the same thing on the AMP version. The one thing I would definitely avoid is having smaller AMP pages than the normal pages. So what I sometimes see is you have kind of like a preview on the AMP page and a link on the bottom: for the full content, visit our desktop page. And that's a really bad user experience. But if these pages are equivalent — if you have page one, page one in AMP, page two, page two in AMP — then that's kind of normal. That's something where page two might rank in the search results, and it would be helpful for us to have page two in AMP form as well, so that we can kind of swap that out for the user when they click on page two in the search results. Okay, thank you. Thank you very much, John.

Hey, John, can I jump in with a quick question? All right, go for it. Thanks, John. As you know, I do a lot of penalty recovery and SEO audits and stuff like that. And I've been emailing you a few that come up that I can't handle. But I have one right now that I was wondering about — the disavow file. Does the disavow file also cover 301s? 301s from, like, spammy sites or from crappy sites to target pages? So how do you mean? So a spammy site 301s to the good site? Yeah, yeah. So one tactic some of these people are trying is they buy an old domain and they make it really spammy and gross looking, with all kinds of nasty content, and then they 301 that to a target page to try to do some damage. Should I — would a disavow cover that? Do you guys care about that thing? I would just submit it as a domain in the disavow file. Yeah. Okay, so that would kind of break the association between the backlink and the target. Great. And is it still important for you guys to crawl the backlink page for the disavow file to become updated? It's kind of like an if loop. We drop the link when we've recrawled or reprocessed the other page, yeah. Right, okay, so that's still the way it works. So I guess on backlink pages that are on some really spammy site that you guys don't bother visiting for, say, six months, it's really important to get you guys to re-spider it somehow. Usually not. So I guess that's kind of the place where it's tricky, in the sense that if we don't bother recrawling it a lot, then it's not gonna have a lot of weight anyway. So that's something where, if it takes us six months to actually look at that page again, then probably the value of that link is so small compared to everything else that it's not even worth putting it in the disavow file, in the extreme case. So that's something where, from my point of view, if you do put it in a disavow file, you definitely don't need to send Googlebot over there with a link to that spammy page so that it crawls that faster, just to get that reprocessed faster, because probably it's so minimal already anyway that it's not worth kind of focusing even more on that. Okay. Obviously that's hard for you to kind of make that decision — like, does Google crawl this often or not? You don't really know. So if you would just put it in a disavow file, then if it's important for us, we'll re-crawl that fast anyway and we'll pick that up in the disavow file. If it's not so important for us, then it's kind of like, well, you've got it covered on the safe side. If it suddenly does become important for us, then we'll make sure not to count that link. Right. But what if there are like 140,000 of those kinds of pages? Are you saying that they have like a .001 effect, or are you saying that they have zero effect and don't worry about it? Because they could add up considerably. Yeah, I don't see any situation where something that we don't crawl within, like, six months — even if you add a whole bunch of those together — would cause any problems. Gotcha. Okay. Okay. Fantastic.
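For reference, the disavow file itself is just a plain text upload where whole domains are listed with a domain: prefix; here is a minimal sketch of writing one in Python — the domain names and the comment text are hypothetical.

```python
# Minimal sketch: write a disavow file that lists whole spammy domains
# (including ones that 301 to the site). The domains are hypothetical.
spammy_domains = ["spammy-widgets.example", "expired-casino-domain.example"]

with open("disavow.txt", "w") as f:
    f.write("# Spammy domains found during link cleanup\n")
    for domain in spammy_domains:
        f.write(f"domain:{domain}\n")
```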
I had one last question about quality, but I wanted to open it up for anybody else who has a burning question at the end. All right, if no person like that exists, though, then I'll jump right back in. I gave them a whole three seconds. I had this question: how often are you guys updating Panda these days? I know it's automatic, it's rolling now. Is it every six months? Is it refreshing every six weeks? What can we expect there? If we make a huge site improvement, how long should we wait before we think, okay, I guess Panda has probably recrawled this by now?

As far as I know, we don't have a fixed timetable for that. So this is one of those things that's just algorithmically updating all the time.

Okay. So is it on crawl? Is it that fast now, or?

As far as I know, it's not quite on crawl, but it is something that just kind of happens, I don't know how to phrase it. It's somewhere in between something that happens every time we look at a page and something that would need to be triggered manually. It's something that does run continuously. So it's not the case that you would see a change as soon as we've recrawled a page and suddenly that page is reevaluated from a quality point of view. It does take a bit of time to collect all of those signals, but it's not the case that there's a monthly refresh cycle that happens, like, every Monday or the first of the month or something like that.

Every Monday, payday, every payday. So is it just weeks or months, or is it years? It's way faster than it used to be.

Yeah, I'd assume this is more something where you'd see the effects as we reprocess the bulk of your website. That depends a bit on the website: some websites we can reprocess fairly quickly; some websites take a bit longer to actually reprocess and understand again.

But what about the ones you crawl constantly, like news websites, which you pretty much suck in every hour?

It's tricky. That's a weird edge case as well, in the sense that parts of that site we crawl very frequently, but with news websites there's a lot of content that's kind of in an archive, where we think, well, last year's news is not really going to change that much today.
So maybe we can wait until next month to actually recrawl last year's news. So that's kind of a tricky situation. But for the normal kind of small-business website, where you have a couple of thousand pages at most, that's something we can generally reprocess within, I don't know, a couple of months, something like that. Okay, that's very interesting. Again, I'll open it up for anybody else who has a question there.

All right, let me just double-check other questions that were submitted. Still a whole bunch here. Is there anything in particular? One thing I thought I'd just mention: after a hacked-site cleanup and removal of the manual action on review, the site experienced a six-page ranking drop, and nothing but 404s is left. Is that a penalty? What might be the case there?

So I took a quick look at that website, and it does look like it got hacked pretty severely. In some situations, our algorithms will be stuck in a weird state where they don't really know: is this content something the webmaster put out there on purpose, or is it something that belongs to the hack? Those are the types of situations where we really need to take the time to reprocess the current website and to understand that this is actually the full website, and that the hacked content that was there before is not supposed to be associated with it. Sometimes that does take a little bit longer than just the manual action being turned on and off. It's essentially the algorithms looking at a website, seeing a whole bunch of spammy, weird links on there, and going, eh, is this something the webmaster put there on purpose, or is it something that was put there accidentally, or by a hacker, or something else? Sometimes that takes a bit of time to be reprocessed. So it's not really the case that there's a manual action or a penalty involved there.

Does Google crawl using HTTP/2? No, at the moment we don't crawl with HTTP/2. We're looking into what we can do there. In general, the tricky part is that, for the most part, Googlebot isn't the same as a browser, so it wouldn't see the same speed effects as a browser would with HTTP/2. We can cache things a little bit differently, and we can do requests in a more parallelized way, a little differently than an average browser can. So we don't see the full advantage of Googlebot going to HTTP/2. But especially as we see more websites implementing, I believe it's the push functionality in HTTP/2, where you can request the HTML page and it includes all of the embedded content right away, that might be something where the Googlebot engineers say, now it really makes sense to actually implement HTTP/2 for Googlebot as well.

Oh, so many things left. Okay, another question I saw here: Google's guidelines around 503 "service unavailable" server responses. We're moving to a new server, we might not have the new server up in time, and we would like to serve a 503. So that's perfectly fine; that's a good use case for a 503. Ideally, you'd want to make sure that the new server is available right when you turn off the old server, so that you don't have this accidental downtime, but sometimes you've got to do what you've got to do and you can't change that. So that's one thing to watch out for there.
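As an illustration of the 503 setup described above, a temporary maintenance responder might look something like this. It is only a minimal sketch, not a recommendation of any particular stack; the Retry-After value and port are arbitrary placeholders.

```python
# Minimal maintenance responder: every request gets a 503 Service Unavailable
# with a Retry-After header while the site is moved to the new server.
from http.server import BaseHTTPRequestHandler, HTTPServer

RETRY_AFTER_SECONDS = 3600  # placeholder: how long clients and crawlers should wait

class MaintenanceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Site temporarily unavailable while we move servers."
        self.send_response(503)
        self.send_header("Retry-After", str(RETRY_AFTER_SECONDS))
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_HEAD(self):
        self.send_response(503)
        self.send_header("Retry-After", str(RETRY_AFTER_SECONDS))
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), MaintenanceHandler).serve_forever()  # placeholder port
```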
The other thing I saw in your question is that your old server is on Apache and your new one is on IIS. That sounds like you're doing a significant revamp of your website, so I'd definitely make sure that you take a complete inventory of your existing website's URLs and double-check those URLs on your new website afterwards, to make sure that you really have a clean mapping from the old URLs to the new URLs, and that they still work and, ideally, have the same content. And if you do change the URL structure, say from .php to .asp, then make sure you have redirects set up for each and every one of those URLs, so that every one of those old URLs maps to a new URL on your new server, and we can recognize as cleanly as possible that this website is associated with this new website under slightly different URLs. One thing to keep in mind is that if you are changing the URL structure, the URLs on your website, even just from .php to .asp, that's a significant change for us, because all of the URLs on your website have suddenly changed, and that can take a bit of time for us to reprocess, re-crawl, re-index, and understand everything again. So regardless of the 503 that you're serving in the meantime, I would expect a significant period of fluctuations until everything has settled down again.
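To make the old-to-new URL mapping advice above concrete, here is a minimal sketch of verifying that each old URL permanently redirects to its expected new counterpart after the move. The mapping and domain are invented placeholders, and it assumes the third-party requests library.

```python
# Verify that every old URL 301-redirects to the expected new URL after migration.
import requests

# Placeholder mapping from old .php URLs to new .asp URLs.
url_mapping = {
    "https://example.com/products.php": "https://example.com/products.asp",
    "https://example.com/contact.php": "https://example.com/contact.asp",
}

for old_url, expected_new_url in url_mapping.items():
    response = requests.get(old_url, allow_redirects=False, timeout=10)
    location = response.headers.get("Location")
    if response.status_code == 301 and location == expected_new_url:
        print(f"OK     {old_url} -> {location}")
    else:
        print(f"CHECK  {old_url}: status {response.status_code}, Location: {location}")
```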
Hey, John, quick question. All right, go for it. Are branded searches part of the algorithm or not?

Branded searches, part of the algorithm. We have so many algorithms that it's hard for me to say what specifically that might be, but we do try to understand different types of queries and to figure out which types of results make sense for those queries. Sometimes you see that with the knowledge panel on the side, where we recognize that you're talking about a specific entity, we have information about that entity, and maybe we can show that to you directly. So from that point of view, it's kind of what you're talking about there, but it's not the case that we have a hard-coded list of brands and, if you're one of these brands, we will treat your website differently. It's essentially something we try to do on an algorithmic basis across the whole of the web, across all of the queries.

Okay, just using an example here: I have three competitors in front of me, and assuming I have better content, a better backlink profile, better architecture, everything is better, but they have more searches for their brand names. Should we assume that if I invest in branding, in marketing, so that people get to know my name and start searching for it on Google, I may pass these guys, my competitors?

I don't know, I don't know, maybe. The way I like to think about it is that if you invest in branding and you have a strong brand on your website, or just any name that's really clearly associated with your website, then that's essentially free ranking, free traffic for your website. People are searching for you, and they can find you directly. From that point of view, that's something that's always worthwhile to invest in, because if people are searching for you, that means you're doing something right: you're able to get them to search for your name plus whatever keywords, and they're able to find that content specifically on your website. So it's kind of like you're circumventing the whole normal ranking by saying, well, people know me already and they're explicitly looking for me, therefore I don't have to do much more to actually be visible in the search results in a case like that. So that's something where I personally recommend thinking about this from the beginning. Usually the situation where I see this as being more important is if you're just starting out and you have the choice of what to name your website, and you've heard a little bit about SEO and you think, oh, keywords in my URL are what I should have, and then you call your website cheapcars.com because you happen to get that domain name. Then just because people are searching for cheap cars doesn't necessarily mean they're searching for your website, because it's not clear whether that term is meant as a branded query or is meant to go directly to your website. So you're missing that opportunity of being there when people explicitly search for you. One thing I'd keep in mind, especially if you're new, if you're starting out: keywords in your URL are not really your brand. You should really build a brand first and be yourself, not your keywords. Cool, cheers.

I have a question. All right. I have a question, if I may. Sure, go for it. Okay, so it's about hidden and tabbed content. I have a site and we use a lot of graphics. The graphics show up as SVGs, as SVG tags. And in addition to the graphics, I have a tab in which I show the data from the graphic as tabular content, so that the numbers are all shown. But what I'm realizing is that Googlebot isn't picking up the SVG content. I'm not seeing any images in image search that are SVGs, and I'm not seeing any of the tabular content either, in search or in the keywords in Search Console that people would be searching for, portions or things related to what's in the tabs. So my question is, what can I do about this? I was thinking of potentially showing the tables first, but from a user perspective, the user would prefer to see the graph rather than a bunch of numbers, so that's not optimal. What would you propose?

So usually with SVGs, we would pick those up for image search. We would generally not use the text within the SVGs with regard to those images; we treat them more like a black-box image that we found on a page. So for that, it's important to have context for that image, something like a caption at the bottom. I don't know if you can specify an alt text, for example, for those; those might be options there. With regard to the tabular content within the tab, the important part for us is that this content is loaded when the page is loaded. So not that you're doing something like an Ajax call when someone clicks on that tab and then loading the text or the numbers within that tab, but rather that it's available right away when you load the page. So you could do an inspect-element and find that content directly on the page. And one way to double-check that we can pick that up for indexing is to do a site: query: so site, colon, then your domain, and some of the words from that tab's content. That's kind of a rough way to see whether this content is indexable at all.
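As a complement to the site: query John describes, a rough way to check whether tab content is delivered with the page, rather than fetched via Ajax on click, is to look for a known phrase from the tab in the raw HTML. This is only a partial check, since it looks at the served HTML rather than what a JavaScript-rendering crawler would see; the URL, phrase, and requests dependency are all assumptions for illustration.

```python
# Check that text from a hidden tab is present in the HTML served with the page,
# i.e. that it is not only loaded via an Ajax call when the tab is clicked.
import requests

page_url = "https://example.com/chart-page"   # placeholder URL
tab_phrase = "quarterly revenue by region"    # placeholder phrase from the tab content

html = requests.get(page_url, timeout=10).text
if tab_phrase.lower() in html.lower():
    print("Tab text is in the served HTML, so it is available at page load.")
else:
    print("Tab text not found in the served HTML; it may only be loaded via "
          "JavaScript/Ajax on click, which is harder to pick up for indexing.")
```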
And if it's not indexable at all, then usually it's more a matter of a technical reason for that, and less something where I'd say there's a quality reason why we wouldn't show it.

Okay, but the SVGs that I'm showing are embedded SVGs, as an SVG tag. And as far as I can tell, I've yet to see any SVGs in that implementation shown in image search. If I had an image tag and linked to an SVG file, it would be one thing, but I'm not linking to an SVG file. I have embedded SVG, and the SVG is generated using JavaScript, but it does appear in Fetch as Google, and it is there as the page is rendered, not added after the fact.

I'd love to take a look at that. As far as I know, from the last time I talked with the folks about this, we would pick up SVGs that are embedded on a page as well for image search. I wonder if perhaps your implementation with JavaScript-embedded SVGs is throwing us off a little bit. But if you can send me a link, that would be really fantastic to look at. We also have a kind of JavaScript working group that you can join, where you can post your URL as well and some of the issues that you're seeing. Let me just find the link. Whoops, I have no idea. There we go. I'll just put that in the chat here so that you can take a look at it, join, and maybe post a URL or a sample URL. The rendering team is generally monitoring that as well, to see where we have weird edge cases that we aren't covering properly yet. Okay, sounds good. I assume that we should be able to pick that up at some point. It's definitely not the case that we're saying, well, SVGs embedded by JavaScript we will never index. It's probably just that we haven't run across a case where we actually needed to do something specific there. Okay, thank you.

Hey, John, can I ask one last quick question? All right, go for it. As you know, I'm doing a lot of client work, and one of my biggest problems with clients, and I've emailed you about this and I'm going to bug you about it, is that I tell them they have to improve the quality of their websites and their web pages, and I always get this back. They always say, but I looked at my competitor, and I think their web page is just as ugly as mine, or even uglier. So I'm really wishing there were some kind of resource from Google I could point them to, to say, look, your quality index is red, or whatever, some kind of objective-ish thing I could point them to. For example, would you say that the landing page experience metric in AdWords is completely dissimilar from the landing page experience or any kind of quality algorithm you might have on organic, or are there similarities there?

I really don't know how AdWords pulls that together. Okay. It's there, I don't know, maybe it's useful, but I really have no experience with how AdWords pulls that together.

Right, so then I guess, based on past hangouts, you would say that if you want to have a more objective-ish kind of test for quality, we should really be doing some kind of survey with customers and seeing how they fill out their opinions about the website.

Yeah, that's usually what I tend to focus on. But it's tricky. I mean, it's not something where you can take an objective measurement and say, well, the quality is seven out of ten, therefore, I don't know, you should move it to 7.5. But wouldn't it be nice if we could? I don't know, I don't know.
I mean, sometimes it's more a matter of relevance than quality. Yes, of course.

John, what about Lighthouse? Would you suggest using that as a metric or not?

I think Lighthouse is a great tool, but it's a tool. It's not a quality tool; it can test for various things with regard to speed and, I think, the usability things they have in there now. But I don't think it's a replacement for real people looking at your website and giving you their honest opinion.

I was going to ask about Crazy Egg, which is heat-mapping software that gives you click data and scroll data. Do you think that's pretty good?

I mean, these are all tools, and you're kind of the expert with your website, and you know how to use these tools. So you have to figure out which tool is relevant in which situation and how to interpret the results. And I think that's part of the reason why this is such an interesting job, because you do look at things in very different ways, and there's no simple step-by-step guide to making your website high quality.

True. Well, if you're canvassing for ideas, let me just say: at the next meeting, maybe just bring it up. You know they would really like something like the PageRank toolbar, but as a quality score toolbar, and the worst-case scenario is that all we're going to do is make our pages better, which is kind of what you want us to do anyway. So that's the only bad thing that would happen. I guarantee it; if anything else bad happens, you can bring me down to California, call me on the carpet, and give me hell. But I'm telling you, the only bad thing that'll happen is that people will make their pages better. They'll chase that number, you know they will.

And if that number is reflective of actual quality, of relevance, of satisfaction, then that would be a good thing. So, I don't know. Yeah, it's hard. I totally get your point, totally get your point. We're fortune tellers at this point in time. Right, yes.

All right. So on that note, let's take a break here before the quality goes downhill. It's been great having you all here. Lots of good questions, lots of good comments and ideas, so that's also useful for me. I hope to see some of you again in one of the future hangouts, and in the meantime, I wish you a great week. Have a good time. Thanks, John. Thanks, John, take care. Thanks, John.