All right, welcome, everyone, to today's Google Webmaster Central Office Hours Hangouts. My name is John Mueller. I am a webmaster trends analyst here at Google in Switzerland. And part of what we do are these Webmaster Office Hours Hangouts with webmasters and publishers and SEOs, all kinds of people who make websites. As always, a bunch of questions got submitted already. But if any of you want to get started with the first question or two, feel free to jump on in now.

I have a couple of questions if that's OK.

All right, go for it.

So the first one is the new mobile-first index, which has gone live. And in the best practices for the mobile-first index, it says that the content has to be the same on desktop as it is on mobile. And if it's not, then you need to update your mobile version so that they match. If you have content that is shorter on mobile than on desktop, would you receive a penalty for that? Or would that go against your rankings? Or what? Because I imagine a lot of sites out there have a lot of short content because of the user experience, and they pare down the content that they usually have on desktop.

Yeah, so I don't know. I guess on the one hand, people internally, when I ask them this, they ask if it really is the case that you need to provide less content on mobile. Because I assume more and more, as people use mobile as a primary device, they want the full experience. And if something is not necessary for the full experience, then why do you keep it on your desktop version? So that's kind of just as an aside.

But in general, there are two things that come into play here. On the one hand, when we switch sites over to the mobile-first index, we do check to see if the content is equivalent enough so that we feel from a content point of view there wouldn't be any issues with ranking. So if we see that there are significant differences between the mobile and the desktop version, then probably we wouldn't switch that site over. If we did switch that site over, what would happen is we would only index the mobile content. So if there is anything on the desktop version that is critical for that site's ranking and it's not on the mobile version, then we wouldn't index it and we wouldn't be able to rank that site based on that content. So it's not that there is a penalty, or that anyone manually looks at these and says, oh, you're missing five words here, therefore out you go. It's just that those five words won't be taken into account for indexing and ranking.

And in a lot of cases, if this is just additional filler content that you have on your pages, things like your terms of service or just general descriptive content, that doesn't really matter so much for ranking at all. That's something that probably doesn't make a difference. If you have it there or not, if it's indexed or not, it doesn't really change that much. But I think kind of going back to the first one, I'd really think about whether it actually makes sense to reduce the amount of content or the amount of functionality on mobile. Because as people use mobile as a primary device, they will want the full content. They'll want the full functionality.

When you said it wouldn't be switched over, are you saying that you will actually show the desktop version instead of the mobile version when you don't switch it over? Do you know what I mean?

It's not so much showing the desktop or the mobile version. It's just the content that we use for indexing.
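For readers with separate mobile URLs, the setup being discussed here is usually tied together with a pair of annotations. This is only a minimal sketch; the hostnames and path are placeholders, not anything from the discussion:

```html
<!-- On the desktop page, e.g. https://www.example.com/page -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.example.com/page">

<!-- On the mobile page, e.g. https://m.example.com/page -->
<link rel="canonical" href="https://www.example.com/page">
```

With annotations like these in place, the two URLs are treated as one piece of content, which matches what John describes: the content is indexed once and the appropriate URL is shown per device.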
So if you have separate mobile URLs, then we have the desktop URL and the mobile URL, and we know they belong together. But we only index the content once. And we just swap out the URL and show the appropriate one. So that doesn't change with the mobile-first index. It's really just a matter of what content we use for indexing.

OK. And my second question is a quick one. There seems to be, I'm going to call it a bounce rate box now. So if you access a website and then you come back, around that URL and the title and the description there's a kind of "people also searched for" box. Is that just directly related to the phrases you search for? Or is there more being taken into consideration there? Because the box is now around the website you've visited, rather than being at the bottom of the page, which seems quite new. So what is that based on?

I don't know. A lot of these search features are things that teams within Google are experimenting with to try to figure out what the right approach is, what kind of the thresholds are, where to use content, what kind of signals to use for these things. So especially the newer features, I would assume that they're very much in flux. People are experimenting and seeing which way does it make sense to show this to people. How do people actually get value out of this additional information that we'd provide in search?

OK. OK, thank you.

Sure. All right, I think someone else wanted to jump in with a question too.

Hi, John.

Hi.

I have a question for a few of our clients. When I checked their Google Webmaster Tools account, we got some notification like a new index coverage issue. So we got a description like, you have an index coverage issue, you need to fix this. Now we are getting this issue because we have added a noindex tag to some pages which are less important or which don't have too much content. So we don't want Google to index all those pages. So is this bad for the website, if we get this type of notification or if we are adding this tag to a page?

If you're doing that on purpose and you know why they're there, that's perfect. We send out this notification because a lot of people put noindex on their pages accidentally. And then we want to make sure that they're aware of this problem so that they can fix it. But if you put that there on purpose, you say this is the behavior I wanted, then that's perfectly fine.

Do you think it's better if we remove those pages from the sitemap XML so that Google will not find those pages anyway?

We'll probably continue crawling them anyway just because we've seen them before. I think if you want to remove them from search, then I would also remove them from the sitemap file just to be consistent. But that's usually not a big deal.

OK. Thanks, John.

Sure.

Hi, John.

Hi.

I have one question about canonicals. Now, if we have a product category page with URL addresses like example.com slash product category, with subpages page two, page three, and more, should the correct canonical be the example category page, or for every page must we set its own canonical, page two, page three? What do you think?

It depends on how you want them indexed. So if you don't want the other pages indexed, you can certainly set the canonical to the first one. But if there's something on those pages that you want to have indexed, or that you want to have links followed to individual products, then I would not use the canonical tag there.

OK.
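The two options being discussed for a paginated category look roughly like this; the URLs are hypothetical examples, not the asker's actual site:

```html
<!-- Option 1: consolidate, so only the first page is meant to be indexed.
     On https://example.com/product-category/page-2: -->
<link rel="canonical" href="https://example.com/product-category/">

<!-- Option 2: keep each paginated page indexable with a self-referencing
     canonical. On https://example.com/product-category/page-2: -->
<link rel="canonical" href="https://example.com/product-category/page-2">
```

Option 1 matches "set the canonical to the first one"; option 2 is what you would use if you want content on the deeper pages, and links to the individual products there, to be picked up on their own.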
If we have thousands of pages, is it better if we use a canonical to the product category page, because we will also save our crawl budget?

I think for most sites, they don't have to worry about the crawl budget. Especially if you're looking at a range of, I don't know, let me make up a number, like less than a couple of hundred thousand pages, we can crawl those pages just fine for the most part, unless you're on a really, really old server that's barely able to make it through. But then you know that your server is old and slow anyway, so that's something you'd probably want to fix anyway. But otherwise, if you have a reasonable server and you have, I don't know, a couple hundred thousand pages, I would not worry about a crawl budget.

OK. And another option: if we use a self-canonical for the same page, like example page two with the canonical pointing to page two, we will have a problem in Google Search Console with meta descriptions and title tags. Must we worry about them, or?

No, that's perfectly fine. Again, it's kind of like a warning if you weren't aware of that. But if you know that this is how you have your website set up and you think that's OK, no problem with that.

OK, John, thank you so much.

Sure. All right, let me run through some of the questions that were submitted. There are a whole bunch of them here, but we'll see how far we go. And as always, if you have questions or comments in between, feel free to jump on in.

My client's developer duplicated content in a way that looks like cloaking. After that, the traffic just dropped critically. Is that something that would be bad? Should I just remove it? Will it return to normal?

So I guess I don't really know what exactly was implemented here. If you're saying it looks like cloaking, that sounds pretty bad. Sounds like something you'd want to fix anyway. It might also be that the change in traffic is just unrelated to that. So it's really hard to say based on just the information here. What exactly was done on the website? How exactly could that be like cloaking? For example, if you serve Googlebot empty pages and serve users your full page, that would be cloaking. And that would essentially result in us dropping all of the information from the index for that site. So it really depends on what exactly happened there.

My recommendation here would be to go to the Webmaster Help Forum and check with the folks there to look at your specific case, so that people can take a look there and say, this is technically OK, or this is not 100% great, but it's not bad enough that it would cause any issues from a technical point of view. And based on that information, you can make a judgment call a little bit better on how urgently you need to fix this.

If we have a site that was hit by the March 7th and 9th quality update, that's a legitimate site that's generating world-class quality content, getting lots of links, and it gets scraped. And now after the update, the other sites are ranking above us. I guess, kind of like, what happened? What could we do to fix that?

For the most part, I think that can be kind of tricky, because a lot of the updates that we made are more around relevance, where we're trying to figure out which sites are relevant for certain queries, and not so much a matter of quality overall. So that might be something where we just don't think your site is exactly relevant for those specific queries. It doesn't mean that it's a bad site. It's just not relevant for those specific queries.
So that's something that I think just happens to a lot of sites over time, in that they might have really high-quality content, but over time, they're just not seen as being as relevant in the overall picture of the web. So I understand this is always kind of a tricky situation as a site owner, because you want to figure out what you can do to kind of get back into the previous situation. And sometimes, especially when it comes to relevance, it's not the case that you did anything wrong that you need to change to get back. It's just that things have changed.

So what I would recommend doing there is, on the one hand, trying to get feedback from your users to figure out how they feel about your website, and to try to really get objective feedback on what you could be doing differently or what you might want to target differently or kind of set up differently. As always, with a lot of these changes in traffic, I also still recommend double-checking all of the technical details. So things like, are we able to crawl all of your content? Are we able to index the content properly? All of these things could potentially change depending on even small changes that you sometimes make on a website. So I'd really double-check those as well.

John, do you have an example of that?

Of which part?

Of a site that would rank for something, and then it was no longer relevant for those queries. I mean, I can think of an example where we had a site and it was ranked for something completely unrelated. And over time, that corrected itself. But I'm not sure if that's the best case of what you're describing.

I think that can be the case where a site kind of accidentally ranks for some really popular keywords. And you get a lot of traffic from that. And you think, well, this is awesome. And at some point, our algorithms kind of figure it out and they try to fix it. We also see this, so I don't know if this is related to the March updates, but it's something that I've seen over time in the forums as well, where a site might rank for someone else's brand name, for example, because maybe they have a blog post about it, or maybe they have a page like how to log into Gmail, and then suddenly it ranks for Gmail Login, which is kind of nice because you get a lot of traffic, I guess. And people probably click on your ads, and it's kind of worthwhile for you. But from an algorithmic point of view, from a quality point of view, your site is probably not what people are searching for. So that's something that our algorithms will probably try to figure out over time, what would be the better result for a query like this, and that might be a change that would happen. So it's not that your site is worse or seen as being worse. It's just not as relevant for those particular keywords. And our algorithms change over time, and it's something where we try to reflect what the web is also looking for with regards to relevance. So these things can change even without any change on your website.

Besides some more technical issues that are frequently discussed, such as speeding up a site and making it mobile-friendly, what are the most important things business owners should be working towards today, perhaps things they're not doing so well?

That's a really broad question. It's really hard to say. So one of the things that I see regularly, especially from small businesses, is that it's sometimes really unclear what the website is meant for, or what it's meant to target, what people are meant to do when they reach your website.
They might have worked together with a designer to make a really nice and fancy website, but you go there, are they trying to sell me software? Are they offering a service? Do they do consulting? What is it exactly that they want me to do? It's really unclear. Or sometimes they'll have lots of fancy photos and images on the website, which are more kind of for an emotional attachment to the website, and less about the products and services that they sell. So it's not so much a matter of SEO directly, like technical things, like speed and having the right meta tags on the page. It's really more about the basics of what do you want to be found for? What do you think people will be searching for, where your website, where your business, is supposed to be relevant? And how can you make that clear? On the one hand, for users, when they go to your website. On the other hand, also for search engine crawlers, when they go to your website and try to figure out, where can we recommend this site? So that's really one of the bigger things that I see over and over again. I see that regularly at site clinics, when we do them, even for bigger companies: you go to the website, and it looks really nice and fancy, but you have no idea what it is they're actually doing and what they would like to rank for.

So describing their content as a user's model. So that's useful to know. I think my question was, I don't know what you call it, sort of a view of things.

I didn't get that last part.

So you're focus sharing it. Sorry.

Oh, man, you have a really bad connection. It's really hard to hear you. If you're going to talk, sorry. Or maybe you can type it into the chat, into a comment. Maybe the connection will get better over time. You can jump in again.

Hey, hi, John. On the same line, basically, I wanted to add a comment. When you're saying the message is not really clear on those specific websites, is it really necessary to give that kind of summary, where you say a little about what the services or the product are all about, on the specific pages, or do you want to do one on the home page? Like, I mean, we are doing certain things, and this is something we do. And is it a good idea to present that on our specific pages, telling that this is something we do, on a page-by-page basis?

I think that's totally up to you if you want to do that on the home page, or on the product page. It's really, for a large part, a matter of knowing what we should be able to show your page for. So understanding and thinking about what people would be searching for, or what you think your website is really useful for, that's kind of the first thing. So essentially, it's about being direct, understanding, and thinking about it so that people can recognize what it is that you offer, and search engines can figure out, where should we actually rank this content? A lot of times, it'll be easy for us to rank that site based on the business name. If you search for the business name, we can find you. But you probably want to get some kind of visitors for, I don't know, whatever specific activity that you do. And for that, it's not a matter of putting hundreds of keyword variations on the page. It's just a tagline sometimes, just an extra sentence saying, we do this, and this is our specialty, and we love doing X, Y, and Z for people in this area. That kind of information makes a really big difference.
Also, for local businesses, what we sometimes see is they won't list their location. Like, they'll be, I don't know, a bakery that makes really fancy cakes, and trying to find their address is really hard. So if you know that they're in the city, then obviously you can find them there. But if you don't know where they're located, you can come to their website and you're like, do they deliver to Switzerland? Or do they deliver to this other city? So that's something that you can make really clear about a website, and it doesn't take a lot of work. It's not magic. You can do it with any CMS. Just putting a bit more textual content on those pages makes a really big difference.

Thanks, John. So many of the biggest fights we had were mostly with the dev people and product people, basically. And whenever we say that, they're like, it will not be that important. And it's sometimes really hard to make them understand why it is important to tell about our products and everything, no?

OK, let me see. There's something in the chat as well. We changed our domain, configured a 301 redirect from the old to the new domain. For more than a year, Google is still showing site pages with the old domain. What can we do to solve that?

I think one thing that you might want to kind of watch out for is if you're explicitly looking for the old domain name, then probably we'll try to show it to you, even if we've seen the new one. So if you're looking for something in general where your site should rank, then probably we should be showing the new one, especially if this redirect has been in place for a year now. But if you're explicitly looking for the old one, then we'll try to show you the old one, because we think this is probably what you're looking for and we don't want to confuse you with a different domain name. So that's sometimes what throws people off, in that they think a redirect is not processed, but actually it is processed, and we're just trying to be extra friendly and give you the old domain because it looks like you're asking for it. So that might be something to double-check.

All right. In the new Search Console, there's an error "Submitted URL marked noindex" and "Excluded by noindex tag". It's pretty confusing, but I guess it all falls under error. I guess it depends on how it was submitted or found.

Yes, this is something that we differentiate by the source of the URL, because if we think that you're submitting the URL to us, then we think it's most likely a mistake on your side. Because you wouldn't be, on the one hand, saying, here's a URL that Google needs to index, and at the same time saying, oh, but by the way, it's noindex, you can't index it, actually. That's kind of almost a sign that you might have a mistake on your side. On the other hand, if it's just a random URL that we found with crawling that has a noindex on it, then it's a noindex; we don't really need to worry about it too much. It might be something that you put noindex on on purpose. And this is not so much something that we use for ranking or for figuring out if your website is high quality. Obviously, a noindex page is dropped from the index completely, so it's more a matter of, this looks like you have a mistake, and this looks like we just found a noindex, and that could be perfectly fine. Just trying to make it a little bit clearer for you to figure out where maybe there is a mistake and maybe everything is just normal.
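For reference, the noindex that shows up in those Search Console reports is usually one of these two, shown here as a minimal sketch:

```html
<!-- In the page's <head>: -->
<meta name="robots" content="noindex">
```

Or, for non-HTML files or when it is easier to set at the server level, the equivalent HTTP response header:

```
HTTP/1.1 200 OK
X-Robots-Tag: noindex
```

If pages like these are also listed in the sitemap file, removing them there, as mentioned above, keeps the signals consistent.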
If a site has generic content that provides unique functionality, such as features or other interactive elements, is this enough to be considered original by the algorithms? Some webmaster guidelines hint that it might be OK, but it's not very clear.

I think in a case like that, you really want to make sure that your site can stand on its own, that it has enough unique and compelling content so that people, when presented with these different versions of the content, will say, well, this is a site that I would recommend to my friends. It's really the best one of its kind. It provides the most functionality that I actually need to get done whatever it is I want to do. So that's kind of the level you should be aiming for. Instead of just, like, saying, let me take an extreme case, you have a bunch of affiliate feeds that lead to a site, and you just offer a price comparison between two affiliate sites. I don't know, that seems like very low effort and not extremely useful. On the other hand, if you take that to the next level and make it such that it's a really high quality comparison system, where you really see the differences between the different suppliers and why you might want to go here and why you might want to go there, then that's something that does provide a lot of significant value, and people will probably want to recommend that directly. So that's kind of the level that I would aim for.

Also, in general, when it comes to websites and kind of the quality of the website, I would not aim for the lowest bar that you can possibly reach just to be acceptable, but rather really aim to be something that is seen as number one by far. Because only if you're aiming for actually being number one by far are you kind of in this situation where you don't have to worry about the individual quirks of the algorithm, because things kind of fluctuate a little bit anyway. And if you're number 10 and sometimes number 15, then that could have a really big difference on the visibility of your site. But if you're clearly number one and you fluctuate between number one and number two, then it's like you're kind of in a comfortable spot, right? So that's what I would aim for there, not like the lowest bar of what you can just get away with, but really try to figure out a way to be the best by far.

How does one change the site's domain name correctly? The site has not been used in a long time, and as a result, Google has indexed a bad site, with web pages without outlinks to other sites. Now we've developed a new site, filled it with content, and want to launch it on a new domain. How do we do it right? Do I delete the old site? Do we need to change the contacts on the new site?

So I guess it kind of depends on your situation here. It sounds like you've kind of left the old site more or less to get stale and obsolete, and you want to move to a new domain with essentially a new website, not really related to the old one. And if it's essentially a new website, then maybe it just makes sense to just set up the new website, leave the old one, and remove it at some point. On the other hand, if it's kind of an additional variation of your old website that you're putting out, then I would just do a normal site move and really set up all of those signals, those redirects, everything that we have in our documentation, to do a normal site move to your new domain. In both of these cases, it's not going to be that it'll immediately jump up and be the best website in search.
But over time, we should be able to process that and deal with that fairly well.

Hello, John. I have a question.

All right.

One of our clients, they are actually in share trading and cryptocurrency. So they have a website, and they provide both national and international share trading services. Now, we suggested they create two separate pages, one for the national share trading, another one for the international share trading. But they don't want to create a page. They want to post blog posts on this topic. So which one would be the better option, creating a landing page or posting content on a blog?

I think that's more a matter of just the general strategy that the site or the business wants to pursue. So from our point of view, both of these variations could show up in search and could be fairly visible. I think having blog posts is a really useful way to bring content out there and to kind of be associated with the audience directly. Essentially, it's more a matter of how you want to position your website and your business. So that's more up to you.

Thank you.

All right. Now we have a long question about single-page apps. We have a case in which we've been called to optimize a certain website with the following conditions. The site uses AngularJS, a single-page application. So when a call is issued, the framework is called, and then JavaScript pulls in the content. At the moment, it's a demo site with a noindex tag. Our main question is the appropriate use of caching mechanisms for bots. Our tests show that the response time varies between two and four seconds. Would that be a problem?

Let's see. Oh, OK. Oh, this is response time. So they're basically serving a static HTML version to crawlers. I don't see any problem with that. I think that's one way to deal with this. For a large part, we can render JavaScript-based pages fairly well. For really large websites with a lot of content, I realize sometimes it makes sense to pre-render them and dynamically serve a static HTML version to search engines, to social media services, whatever else needs to process content from a website. That sometimes makes it a little bit more predictable what happens. And for search engines in particular, it probably makes it so that we're able to crawl a little bit faster and get the content a little bit faster. So if this is something that you can set up, and that you feel comfortable maintaining in the sense that you can confirm that the pre-rendered version is really equivalent to the version that a user would see, then that sounds like a reasonable approach.

One thing to watch out for here, especially for search engines, is that most search engines have a desktop and a mobile-type browser or user agent that they use. And if you're dynamically generating the pages for these, you need to make sure that you're serving the appropriate version to the appropriate device type. So don't just look for a Googlebot user agent and then serve the desktop version of the page to Googlebot all the time. Because when we crawl with our smartphone crawler, it'll also have Googlebot in the user agent name, but we'd like to see the mobile version of the page then.

With regards to the delay, if this is a couple of seconds, that's generally OK. I would still try to find ways to improve this. So if you can do something fancier with caching to make it go even faster, that would be fantastic. Usually, the delay has more of an effect on how much we can crawl on your website overall.
So if we see that a page takes a long time to be served to Googlebot, then we assume that your server is kind of overloaded, and we generally kind of back off from crawling a little bit, just to make sure that we don't cause any additional problems on your server for your normal users. So that's kind of where I would say, well, you want to serve these static pages as quickly as you can, so that search engines don't feel that they need to back off and, like, oh, I'll just crawl a few hundred pages a day from this website because it's really slow, which is probably not what you'd like.

Thank you, John. This is very important for us.

Cool. Yeah, I think it's one of those topics that's going to be more and more important, because these frameworks are really, really popular, and you can do really fancy things with them. And getting experience on how to set them up properly is, I think, really valuable, especially for SEOs, because traditionally it's always been like, oh, this website uses JavaScript, it will never be indexed. And now you kind of have a new reality in the last couple of years that SEOs, for a large part, have been kind of slow in recognizing. So getting started on this, I think, is fantastic.

OK, John, where can we test the demo version?

What I would use is the rich results test, or Fetch and Render in Search Console, because what you have there is the ability to see the rendered DOM as well, so the rendered HTML. And that way, you can double-check to see that things like image tags are pulled in properly, the titles are working, all of that.

OK, thank you.

Thank you. Sure, let's see. Question about bounce rates. I feel that Google has been focused more on user intent. And let's see. Does Google count the visitor bounce when ranking a site? I don't think Google would keep a site number one if they have a high bounce rate. So what would it be?

So we do look at a lot of signals, in particular when we evaluate algorithms. So if we have different algorithms that we're looking at, that we're trying out, then we track a whole bunch of user signals to see, are we getting it right? Sometimes we have to be careful with how we evaluate, though, because there are things like kind of clickbaity titles that people click on, and they always go there, but actually it's not really that great content. So this is the kind of thing that, when we evaluate algorithms, we have to watch out for. On a per-site level, on a per-URL level, I think that's really, really hard and tricky. A lot of sites have kind of short-form content, and that's perfectly fine. So if you have information that people can pick up on fairly quickly, I don't know, maybe you have a website for an event that's happening in the near future, and you just want to get the date and time and the location out there. If people can go to your page, get that information, and they're done, it's like, that's perfect. That's not something bad that we think we should hold sites back from. So with things like that in mind, it's not a metric that you can just blindly use and say, well, it has a high bounce rate, therefore it must be bad. Maybe it's particularly good, because people are getting the information that they need fairly quickly. So I think this is one of those metrics that people focus on a little bit too much. And actually, from a search point of view, it's not really what you think it is, or doing what you think it's doing.
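Going back to the single-page-app question from a little earlier, here is a rough sketch of the kind of pre-rendering setup John describes, written as a small Flask app. The bot list, the snapshot cache, and render_spa_shell() are hypothetical placeholders rather than anything from the question; the point is only the user-agent handling, including serving the mobile snapshot to the smartphone Googlebot.

```python
from flask import Flask, request

app = Flask(__name__)

# Hypothetical list of crawler markers; a real setup would be more careful.
BOT_MARKERS = ("googlebot", "bingbot", "twitterbot", "facebookexternalhit")

# Pretend cache of pre-rendered snapshots, keyed by (path, device type).
PRERENDER_CACHE = {
    ("/", "desktop"): "<html><body>full desktop snapshot</body></html>",
    ("/", "mobile"): "<html><body>full mobile snapshot</body></html>",
}


def is_bot(user_agent):
    ua = user_agent.lower()
    return any(marker in ua for marker in BOT_MARKERS)


def device_type(user_agent):
    # The smartphone crawler also says "Googlebot", so decide desktop vs.
    # mobile from the rest of the user-agent string, not the bot name alone.
    return "mobile" if "mobile" in user_agent.lower() else "desktop"


def render_spa_shell():
    # Normal users get the JavaScript app shell; the client pulls in content.
    return "<html><body><div id='app'></div><script src='/app.js'></script></body></html>"


@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def serve(path):
    ua = request.headers.get("User-Agent", "")
    if is_bot(ua):
        snapshot = PRERENDER_CACHE.get(("/" + path, device_type(ua)))
        if snapshot is not None:
            return snapshot  # static HTML, equivalent to the rendered page
    return render_spa_shell()
```

The key details are the ones called out above: the pre-rendered snapshot has to stay equivalent to what users see, and the mobile crawler has to get the mobile version.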
Is showing ads inside content considered poor quality by the Google algorithm, even though we properly label them and put them in a container so that people know that they're an ad?

For the most part, ads are perfectly fine. There is the Better Ads standard, which I believe the Chrome team is working together with, to try to recognize sites that are particularly problematic with regards to ads. But it sounds like that wouldn't be the case here. The other thing that comes to mind is that we want to be able to recognize or find the content above the fold for most sites. So if a user is landing on a product page or a page for an individual piece of content on your website, and all they see on top are a whole bunch of ads, then that's a really bad user experience, and that's something that our algorithms try to figure out. On the other hand, if the primary content is visible to users, if there are some ads on the page, that's perfectly fine. That's not something to worry about.

We're seeing a ranking difference in mobile and desktop. Is this the mobile-first indexing effect?

No, that would not be from mobile-first indexing. Mobile-first indexing is basically when we index the content with a mobile device. So in a case like that, that indexed content would be used for both desktop and mobile rankings. If you're seeing a difference in rankings for mobile and desktop, that's for the most part just the normal differences that there are between desktop and mobile search results. It's not a matter of the content being indexed differently. So indexing is basically collecting the information from the page and storing it on our side, and then the ranking side is taking that collected information and trying to find the right order in the search results for that site. So if you're seeing differences with ranking on desktop and mobile, that would not be from mobile-first indexing.

At what period of time and at what percentage of users does Google start to have issues with A/B tests? There are some areas of the website where traffic is low, so it takes a bit longer to get a good sample set. Sometimes it takes six months to get a significant sample set.

For the most part, that's OK. What we really need to have is that the content is equivalent during that time. So if you're doing an A/B test and you're testing things like colors and button placement and things like that, that for us is totally unproblematic. You can run those tests forever from our point of view. On the other hand, if you're doing an A/B test where one page has just a title and the other page has all of the textual content on it, then that would be a bit more tricky. That's also the kind of thing where, if we index maybe the B version or the A version of a page, suddenly we have vastly different content available for ranking, and that's something that you'd probably want to avoid as well.

We have structured data for paywalls in planning, but it's not yet integrated. When will Google identify our pages as cloaking? Or how long is the transition period, if there is one?

It sounds like you know what you need to do. So from that point of view, I'd kind of try to get this finished as quickly as possible. For the most part, we're aware that this is sometimes tricky to do, and we do kind of take that into account when we look at these things on our side.
But I would really try to get this resolved as quickly as possible, so that you don't have to worry about, will someone from the webspam team take a look at my site and think that I'm doing something bad, so that you're really on the safe side.

I've seen some sites ranking, even though 95% of the content is copied. Are those sites ranking due to a lot of backlinks? Should we focus on unique content or backlinks?

So I think, for the most part, the important part here is you should not be focusing on backlinks. So let me just mute you for a second. So I think the important part here is you should not be focusing on some random other spammy sites and saying, oh, I want to be just as bad as them. You should really aim to be significantly better, and make sure that what you're providing is not just 94% copied, if the others are 95% copied, but really that you're providing something significantly valuable and useful to the web. Where our algorithms, when they look at your site, they're not saying, oh, this is mostly bad, but look, there's this one little piece of thing or this small signal that says it's kind of OK. But rather, it should be the case that when our algorithms look at your site, they're saying, all of this stuff is fantastic, and maybe there are some things that they forgot or that they missed out, and I don't know how that happens. That's the kind of situation where our algorithms will try to rank that a lot better. Whereas if you're just trying to be just as bad as all of the others, then that's a really, really bad strategy.

So we see this a lot in the forums as well, where an affiliate site will come in and say, well, for this query, you're showing nine other cheap affiliate sites that are just scraping the feeds that they're getting from the provider. Why don't you show my affiliate site as well, which is also scraping the feeds? If we're already showing nine low-quality sites, then why would we need to show another low-quality site? On the other hand, if we're showing nine low-quality sites and we have this one really fantastic site that we could also show, then that's an argument. That's something that works for our algorithms. And in particular, that works really well for our teams as well, where if we see this kind of situation in the help forum or somewhere else, then we can go to the team and say, hey, for this query, this site that we should actually be showing, we're not showing it at all. And they can look at that and say, yeah, you're probably right. We should be showing this for this query. We should be showing this as number one, because it's clearly the best site of its kind for this query. On the other hand, if we go to them and say, hey, there are nine sites that are just doing the same thing, why can't this one guy here get a break and also be shown there? They're like, why do we need to add yet another low-quality site to the mix? It doesn't significantly improve things. There's no reason to do it. So that's kind of what I would aim for there, not just to be kind of similar to everyone else, but really to be significantly better.

Let's see. According to the developer site, Google uses a web rendering service based on Chrome 41. For mobile-first indexing, will that be changed or not?

At the moment, we're still using this. I expect over time that we'll switch to kind of a more modern Chrome release schedule, but that's probably still a little bit out. For mobile-first indexing, it'll continue to use kind of the Chrome 41 setup.
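Going back to the paywall question from a little earlier, the structured data Google documented for paywalled content looks roughly like this. Treat it as a sketch; the article details and the .paywalled-section class name are made-up placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example headline",
  "isAccessibleForFree": "False",
  "hasPart": {
    "@type": "WebPageElement",
    "isAccessibleForFree": "False",
    "cssSelector": ".paywalled-section"
  }
}
</script>

<div class="paywalled-section">
  Content that is only shown to subscribers goes here.
</div>
```

Markup along these lines is what signals that the difference between what users and Googlebot see behind the paywall is intentional, which is what keeps it from looking like cloaking.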
With regards to page speed, it seems a lot of SEOs and developers get hung up on metrics like time to first byte or DOM complete, possibly because it's easy to report progress. Any comment on that?

With regards to speed, we do look at the overall picture to try to figure out what really is happening here with regards to speed. In particular, with the changes that are happening, I think in June or July, with regards to mobile speed, we try to look at a number of different metrics to figure out what is actually relevant here and what isn't so relevant here. Because we know that people like to focus on individual numbers and then try to find ways to optimize those without actually improving things for users. So we want to focus on a variety of different signals to figure out what is actually relevant for these individual sites. I think using tools that pull out individual metrics is a great way to recognize where low-hanging fruit might be, where you can make a significant improvement on your site's speed in an easy way sometimes. It's also a really useful way to kind of monitor if your site is still in the range where you'd like to be. That's also really important to do, that you have some kind of automated monitoring setup that tells you when changes that you've made on your site have resulted in significant slowdowns. And for all of these, you might use any one of these metrics, you might use multiple tools that look at multiple metrics. That's really up to you.

Does HTTPS count as a ranking signal if the website has implemented HTTPS but is using an exploitable cipher, for example, is vulnerable to the OpenSSL padding oracle vulnerability?

Yes. So if we can recognize that your site is using a modern HTTPS certificate and that it works in a modern browser, then for the most part, we'll accept that. And we'll kind of use that as a signal that we should be indexing the HTTPS version of your pages. And once we index the HTTPS version of your page, then that's kind of what we look for there. And the ranking signal is not something where sites jump up to number one from number 10. It's really a more subtle signal that we use mostly as a tiebreaker in situations where all else might be kind of equal. So that's kind of there. With regards to exploitable setups on a server, at the moment, at least, we don't differentiate between the exact type of HTTPS setup. We don't differentiate between the exact certificate type that you have, or the certificate validity period, anything like that. We just try to see if there's a valid HTTPS certificate or not.

Let's see. How does Google treat sites that are created on the basis of a marketplace? The site is hosted on its own domain but has the same template and a link from the home page through to the marketplace itself. Does this affect the ranking of the site in the market?

I'm not really sure how you mean marketplace. It sounds like this might be an affiliate site, where you have essentially the same content with your affiliate links and you're guiding people to the same content. In a case like that, I'd really make sure that your site has significant, unique, and compelling content, so that we can actually recognize that there is a reason to index this page separately, that there's a reason to show this page in the search results separately, rather than us looking at it and saying, well, it's actually exactly the same as the primary site as well, so we don't really need to index both of these.

I have a site with very long content.
When I use Fetch and Render, only one-third of the page renders for Googlebot and users. Would this affect ranking? Or is this something with the scroll bar?

I've seen a few of these cases, and it's mostly just that the testing tools have some limits with regards to what they think makes sense to show directly in the tool. And for the most part, that seems to be what people are running into. A simple way to double-check indexing is to just take a snippet of text that's only on the lower part of your page and to search for that. And if your pages show up for that snippet of text, then obviously we can index the page that far.

May business review markup, star ratings, be applied to all pages of a website?

Structured data should be specific to the primary topic of the page. So if the primary topic of your page is something that people can review, for example, if you're selling a product and people can review that product, then sure, you can put that on all pages that have products like that. On the other hand, if the markup is specific to your website in general, or specific to one product and not all of them, then that's something where putting the same structured data across your whole site would be wrong. So it depends on what you have there.

I'm trying to move my website to a different domain. The Google forum is not responding, and we can't move.

So you're welcome to kind of drop me a link to your thread, or if you have a thread in the English forum, I can double-check there as well. But in general, site moves are really kind of unproblematic in the meantime, in the sense that we can process them fairly well. If you set up the appropriate redirects, if you follow the instructions in our Help Center, then for the most part, these are pretty uneventful. They should just work.

How does one optimize to get into software carousels that show up above the search results? Where does Google pull these results from? These show up for all kinds of queries.

So I think there are two main things there. On the one hand, there are kind of these general carousels that we sometimes show based on information we've crawled from the web in general. And I believe there are also unique kinds of carousels or one-boxes that we have for some types of content, based on the structured data on the pages. So in particular, I believe for software we have kind of a structured data type that you can apply to your pages. And with this structured data, we can understand what your pages are about a little bit easier, we can recognize maybe images on the page a little bit easier, and we can show that content as matching into this kind of group or category of content a little bit easier. So that's kind of what I would aim for there. Make sure that you have the right structured data so that everything kind of aligns.

All right. We're running low on time. What else can I help you all with?

Hi, John.

Hi.

So I found in the Lighthouse reports, the audit from Google Chrome, that it checks the font size on mobile devices. And I wanted to ask you, how important is that for SEO, or is it just for user experience, from a user experience perspective?

Mostly for user experience, but we do take that into account for mobile friendliness. So I don't know if the threshold is exactly the same as in Lighthouse. But with the mobile friendly test, we do try to understand how much of a page has a small font size that can't be read. And if that's a lot, then we wouldn't treat that page as being mobile friendly.
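As a rough illustration of the legible-font-size check being discussed, the commonly cited baseline is a viewport meta tag plus a base font size of around 16 CSS pixels, with smaller text kept to a minimum. The exact numbers here are a rule of thumb, not a documented threshold:

```html
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
  body { font-size: 16px; line-height: 1.4; }
  /* Keep smaller text, like legal fine print, to a small share of the page. */
  .fine-print { font-size: 12px; }
</style>
```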
I don't know if the thresholds are exactly the same with the Lighthouse test. The Lighthouse test is really more oriented towards user experience.

Is there any specific metric for the font sizes, something like a standard for all kinds of mobile devices, or, I mean, for other things?

I thought we had something documented in the mobile friendly sites section in the developer docs, but I'm not 100% sure. I'd double-check the developer documentation for mobile friendly sites. I believe we have something there. So I'd recommend, if you go to the developer docs, you can just check it there.

OK. Yeah. OK, thank you.

Thank you.

Hello, John.

Hi.

Hey, John. So I have one question regarding this Fetch as Google. So I have a few URLs where, when I do this Fetch as Google, I see sometimes my render view is coming but not showing the full render of the page. But for some of the URLs, it is showing the full render. So just wondering why it is happening that for some URLs it is not showing the full render and for some URLs it is showing it full.

So how do you mean, not showing the full render? What is it?

So there are two. I mean, I'm getting these two views, one as Googlebot and one as the user. Basically, you want to scroll down in that particular window, but I only see one part at a time. I mean, it's not showing all of the data. But when I'm checking some other URLs from the same website, it's showing the full data. So that's why I'm wondering why it is happening.

So it's rendering the view, but not all the way down? Or not?

Yes. Yeah.

I assume that's just a limitation of the tool with regards to how high of a viewport we show in that screenshot. So that, from my point of view, wouldn't necessarily be a bad sign. Again, I would just double-check the content and try to find a snippet on the lower part of the page and search for that. And if you can find that in search, then that's perfectly fine.

All right, fine. Thank you.

Hello, John.

Hi. Hello.

So I want to ask one question about HTML validation issues. Does it really affect site quality? I mean, we know that most browsers know how to correct these problems. But what do you think if you have a page with a lot of HTML errors?

For the most part, it doesn't matter. So we can deal with a lot of broken HTML, because most of the web has broken HTML. You're not alone. But in particular, for structured data, if you want to mark things up on a page, it helps a lot to have clean HTML, because then you can really say, this is the section that has the title, and this is the image, and these are the reviews for this product. That makes it a lot easier if you have clean HTML, because you don't have to worry about the HTML breaking the structured data apart. But for the most part, for normal content pages, if it works in a browser, it should work for Googlebot.

Thank you.

Hello. Hi. Hi, John. Hello.

Yeah, go for it.

Yeah, OK. John, I have a couple of questions. Do canonical tags pass link juice from one page to another page like redirects do?

Sorry, I didn't get the first part.

OK. Do canonical tags pass link juice?

Yeah, yeah, yeah.

Pass link juice from one page to another page like redirects do?

So with the canonical tag, you're basically saying that two pages are the same, and we combine all of the signals that we have for those pages. So that includes things like PageRank, which people usually call link juice. But yeah.

All right. OK. Like, let's say page A has a canonical pointing to page B. So I suppose page A will get indexed, right?

Usually, but not always.
So when we have this situation with multiple pages that have the same content, we use signals like the canonical tag, like redirects, like internal links, like sitemaps, to figure out which one of these, A or B, should be the one that we keep. So if everything tells us that A is the one that is the canonical that we should keep, then we'll use A. If there's a mix, where some signals say this one and some signals say that one, then sometimes we'll use this one and sometimes we'll use the other one. It's not 100% guaranteed.

All right, all right. The other one: if we are not able to see the cache of some of the pages, some of the pages are not showing their cached version, what might be the reason for the same?

That can be completely normal. That's something where our algorithms or our systems sometimes just don't have a cached page that we can link to directly, and that's completely normal.

OK, OK, great. Thank you, John.

All right, go for it.

Is there value, let's say you have a site and it's pure content, it's very sterile, is there value in adding imagery just to make it kind of easier for the users to read, assuming that adding the imagery is going to make them want to read the articles, if that makes sense?

I think there's value in doing that from kind of a long-term point of view. It's not the case that we would have any kind of SEO signal saying there are images in this article, therefore it must be better. Obviously, if the images are things that you want to have indexed for image search, then by having them in the article, we can pick them up. But just putting images into a textual article doesn't kind of make our algorithms think more of the page than without those images.

Right, right, I understand. Thank you.

All right, we're a bit over our time. Let's take a break here. It's been great having you all here. Lots of cool questions and comments in between. I hope to see you all again in one of the future Hangouts. I'll set the next ones up probably later today. It's a bit tricky with the timing, but I'll figure something out. And again, wish you all a great weekend. Thanks for dropping by, and have a good day.

Thanks for these 10 minutes more, John. Thank you, we appreciate it.

Very welcome. Have a great weekend. Bye, everyone.

Bye. Bye, John.