 All right, welcome, everyone, to today's Google Webmaster Central Office Hours Hangouts. My name is John Mueller. I am a webmaster trends analyst at Google here in Switzerland. And part of what I do is these webmaster hangouts with publishers and webmasters and SEOs like the ones here, also like the ones that submitted a bunch of questions already. As always, I'd like to give you all a chance to ask the first questions, especially if you're kind of new to these Hangouts and haven't been regularly joining them. What's on your mind? Hi, John. I have a question about mobile interstitial penalties. All right. So first of all, is it real time, or is it updated every once in a while? And if we get hit, we have to wait a while for recovery. It's updated when we re-crawl and re-index the pages. So that's basically real time? More or less, yeah. Yeah, and that takes maybe a week or so for all the processing to take place? It's hard to say, because it really depends on how often we re-crawl those pages. So for most sites, some pages will re-crawl every couple of days, and other pages will re-crawl every couple of months. So that's something where there's kind of this flexible area. Lag. OK. And what is the, in the announcement, they said that it was a reasonable ad space, not the whole overlay, but that a reasonable banner would be allowable? Is there any specific on that percentage of page space that can be used 20%, 25%, anything like that? No. Or recommended format or best replacement that Google would recommend for moving from a overlay to a banner? I don't have any explicit percentage numbers that I can share there. So I'd recommend looking at things like the normal app banners. I believe Safari has a standard app banner and maybe doing something around that size. OK. That's what I was looking at. It seemed like those were like 20% of the screen size, maybe a little more, a little less, depending. And also, do you know if there's any side effects to that penalty that would cause a drop in targeted traffic? Because I had the interstitial on there, which I've since removed. But I noticed that before that, there was a big drop in conversions before there was even a drop in traffic. And I'm wondering if there was some kind of side effect where certain keywords were penalized more than others? Shouldn't be. So what you'll probably see is, if someone is explicitly looking for your site, we'll still show it. It's not that we can hide your site just because you have an interstitial if they want to go there. Yeah, it was organic search from Google that it was the same amount of search coming through. But it was just conversions dropped by about 35%, which was odd to me. And that was since before the rollout. So that was January 3rd. I think that started. So I don't know if that was just normal fluctuation in Google updates or what? Probably just normal fluctuations and changes. So I mean, we wouldn't be affecting your conversion rate anyway. Yeah, well, I would just want about the targeting of the certain keywords or not. OK, thank you very much. The certain keywords aspect you'll probably see with regards to branded traffic and non-branded traffic. Thank you very much, John. John, can I just ask you a quick question on that following on? All right, a quick one. Yeah, I'll be real quick. On the interstitials penalty that there's a suppression of your like, whatever you want to call it, it literally is only measured on a page by page basis, isn't it? 
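For reference, the kind of compact app banner mentioned above, Safari's Smart App Banner, is added with a single meta tag in the page head. A minimal sketch, with a placeholder App Store ID and URL:

```html
<!-- Safari Smart App Banner: a small, dismissible strip instead of a
     full-screen app interstitial. The app-id and app-argument are placeholders. -->
<head>
  <meta name="apple-itunes-app"
        content="app-id=123456789, app-argument=https://www.example.com/current-page">
</head>
```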
So somebody's got a massive multimillion URL website and they have interstitials across all of those pages. Are they going to get like a compounded penalty? Or is it literally just a page gets a mark and it's the same across every page? Yeah, they don't get an accumulation because they are really bad interstitial penalty people. They're interstitial stuffers as such. We wouldn't differentiate between like, how bad the interstitial is. I don't mean that I mean, volumes of interstitials across a domain as such. No, I mean, we try to be as granular as possible. And in many cases, we can do that on a per page basis, but we can't guarantee that. So it might be that we recognize this is across the whole site here. And we will try to see it as such on the site. But it's not the case that we would say, well, this is on individual pages and across the site. Therefore, they get twice as hard. It's really either we recognize it as having interstitials or not. So you could have interstitials maybe in your support section, but then you would effectively be just deemed as this is a site with interstitials or it's not a site with interstitials, end of, pretty much. If we can recognize that as something kind of granular enough for the interstitials, then we can do that. But it's something where I wouldn't recommend doing that because you're kind of like saying, well, I know this is a bad experience, but I'm going to do it anyway on these pages that I really want to have it on. So I don't know. I'd either see it as a bad user experience and then not do it or, like I say, I don't care about these guidelines. I'll do whatever I want. OK, all right. OK, thank you. John, just very quickly on the same subject. When you're looking at the interstitials, are you looking at what's behind them when you're kind of determining the impact of them? Or are you kind of looking purely at the size of them? Because I've noticed there were already some kind of cheeky sites out there that are trying to kind of have just only half page interstitials. But what they do is they put it at the bottom of the page and then above them and make sure there's no content. They might put an ad above them that isn't an interstitial and then maybe just a page title. So they've effectively got a full page interstitial. They're just placing it very carefully to obscure any content underneath. I just wonder if Google kind of is kind of good enough to actually understand that kind of thing or it's looking purely at size. Good enough to understand all types, probably not. But we do recognize lots of different types. So that's something where we have seen sites do weird things, kind of sneakier things or less sneaky things. And we're trying to take the right action here. And we're trying to kind of tune our classifiers as well over time as we see reports from people saying, oh, you're not picking up this or you're not picking up that, that we can kind of improve that. Is there more machine learning spam? Like is it machine learning spam? I mean, that picks it up or? I don't think we'd go into the details of what we're actually doing there. Okay. Is there a cumulative side to it? So I mean, again, I've seen certain sites, very well known sites that are really trying to honor the moment where they had a full screen before and now they make it kind of half size and move it towards the bottom. Will there be a cumulative side? 
But if you see sites really trying to only kind of trim it down and make it a bit smaller, will they end up perhaps with something a bit more severe than simply on a page by page basis as you crawl? No, no. I mean, we'd either recognize it as such or we wouldn't. So it's not that we would say, oh, you're trying to be sneaky, therefore you'll get hit twice as hard for the interstitial. That's not something that we would do. Cool. I'm not asking for me, by the way. As you probably know, I have a bit of a problem with interstitials, and it would just obviously be good to know what Google's doing about it. Yeah. Yeah. We don't believe you, Simon. I'm gonna have to get rid of all those interstitials from my side now, so I don't know. All right. Let me run through some of the questions that were submitted and then we can kind of go through the other ones as well, or if you have comments in between, feel free to jump on in. If you're suffering from Panda, could it be a good idea to noindex any pages that you know are not particularly well written? We have 2,000 pages and we want to kind of recover a bit quicker. So from my point of view, I'd recommend improving those pages rather than just removing them, because if they have some content that you thought was useful to have indexed, then probably it's worth having good content on those pages in the index. So I tend more towards improving that content. If you really can't improve that content, or if you really don't have time to go through that, then noindex is kind of a stopgap measure where you can say, well, I'll keep it on my site, I know I want to improve this, I don't want Google to take it into account for the moment. I have an affiliate site, and as part of adding value we thought it would be nice to have a price comparison on there. How can we best display this? Do we need to link to the comparison site? What's the best way to do this so that Google understands? So I think the first thing I would mention here is you shouldn't do it so that Google understands, you should do it for your users. So really make sure that when someone goes to your site, they realize this is a really good site, this is something that I'd recommend to my friends, to other people that I know online. So make something that other people would recommend, that they would link to, rather than just trying to tweak things on the pages so that Google thinks that there's something unique on here as well. That's something that we often see with affiliate sites, for example, that they'll spin the descriptions or spin some of the content on the page and hope that Google thinks that this is a better site because they're kind of tweaking things rather than actually improving it. And that's definitely not a good idea. If you think that adding a price comparison makes sense, and if you've seen that it works well for your users, if you've done A/B testing, for example, then I would go for that. I think that's a great thing to add to a site. I'd try to make sure that that's not the only thing that differentiates you from all of the other affiliates of the same products, but it's definitely something that can add a bit of value. A question regarding fetch and render: we use 100% vertical height on the hero section CSS of our website, which makes Fetch as Google previews look incorrect, as you can't see any of the content below. Can you confirm if this is good or bad?
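For the noindex stopgap described above, the usual way to keep a thin page on the site while asking search engines not to index it is a robots meta tag. A minimal sketch:

```html
<!-- Stopgap for a thin page you plan to improve later: it stays on the site,
     but search engines are asked not to index it for now. -->
<head>
  <meta name="robots" content="noindex">
</head>
```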
So from our point of view, this is probably OK in the sense that we can still render the rest of the page and kind of pick up on the actual content. From a practical point of view, this makes debugging things very hard. So that's something where I might look into figuring out a way to handle this so that you can actually use fetch and render to look at your pages and make sure that the rest of the content actually does work. So I wouldn't see it as a critical issue, but to make your life easier, I'd recommend trying to find a solution for that. Are AMP pages used in calculating the site quality and Panda? Are they treated like normal pages from a site quality point of view? So I guess you need to differentiate between AMP pages that are tied in as separate AMP pages for your site and AMP pages that are the canonical versions of your site. So if you use AMP as a way of making your site, which you can, there are a bunch of sites out there that are using AMP kind of as a JavaScript framework for making sites, which is perfectly fine. Then if it's the canonical URL for search, if it's the one that we actually indexed, then yes, we will use that when determining the quality of the site, when looking at things overall. So if it's the canonical, we'll use that. If it's not the canonical, if it's like an added AMP page to your desktop site, to your mobile site, then we'll focus on the canonical when it comes to determining the quality of the page itself. So those are essentially the two main options there. One thing I'd also add here is if you're using AMP as a separate page for your site, then I'd try to make sure that as much as possible the AMP version is actually equivalent to your main version. So avoid the situation where the AMP is kind of a trimmed-down version of your page, where it doesn't have videos or doesn't have the full content, because that's a terrible user experience. And I know the AMP team really doesn't like it when people serve low-quality AMP instead of normal pages. So if you have that content, then make that content shine on an AMP page. Even though we have HTTPS on our entire site, some pages are not entirely green and are marked as insecure. How important is that for Google? Optimizing these pages is pretty hard sometimes. So from Google's web search point of view, if we index a page with HTTPS, then for us, that's essentially what we're looking for. We know it's on HTTPS. We can send people there. We can give that subtle ranking boost for HTTPS pages. With regards to users themselves, obviously, if they land on a page that has kind of this issue where you're embedding HTTP content on an HTTPS page, where we wouldn't show that green badge in Chrome, then that's not really great. So that's something I'd resolve there. With regards to search, it's also worth keeping in mind that if we can access the HTTP version of the page and the HTTPS version, and if we can recognize that the HTTPS version isn't actually a real, proper, well-formed HTTPS version, then we might choose the HTTP version to show in search instead. So this is something I'd try to fix. Yeah, go ahead. What if a website owner doesn't ever want to go HTTPS? What would happen eventually? Let's say a year from now. Suppose he doesn't convert to HTTPS until, let's say, 2018. What would happen to his site eventually? I don't know. It'd probably continue to be on HTTP.
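To make the two AMP setups above concrete: when AMP is an added, non-canonical version, the regular page points to it with rel=amphtml and the AMP page points back with rel=canonical; when AMP is the page itself, the canonical points to itself. A minimal sketch with placeholder URLs:

```html
<!-- Case 1: separate AMP page (the desktop/mobile page stays canonical). -->

<!-- On the regular, canonical page: -->
<link rel="amphtml" href="https://www.example.com/article.amp.html">

<!-- On the paired AMP page: -->
<link rel="canonical" href="https://www.example.com/article.html">

<!-- Case 2: AMP is the canonical page itself, so its canonical points to itself: -->
<link rel="canonical" href="https://www.example.com/article.html">
```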
I don't think there would be anything from Google's search point of view where we would say, we're not going to show it, because there's just so much of the web that is on HTTP and kind of has remained on HTTP for years and years and years. So I don't think we would force that. I don't think that would really make sense. On the other hand, this is a trend that is moving forward. So if you're actively developing your website, then I'd really look into kind of making sure that it's on HTTPS. One of the things there with regards to HTTPS is not just that it's encrypted and securely transmitted. It's also that people can't modify it on the way. So if you're on an airport Wi-Fi, the Wi-Fi network can't modify the content, like injecting ads into your pages, for example, which is something that is easily doable on HTTP. On HTTPS, that wouldn't work. Like if the person is blogging or something like that, right? Yeah, so in any case, if you have a business website, imagine someone else is injecting ads on your pages. That's probably not something you'd appreciate. And with regards to blogging, the thing to keep in mind there is if you log into your site over HTTP and you're on a public Wi-Fi, then you're essentially transmitting that login information in clear text for anyone to pick up. So anyone could be listening in. You wouldn't know about that. And they could be recording the username and password for your site. And at some later point, I don't know, do crazy stuff. But these are all things that are possible with HTTP that are pretty well protected on HTTPS. And that's also one of the reasons why we're flagging pages in Chrome that have a password field on HTTP or that have a credit card field, because that could easily also be picked up over a public Wi-Fi. It's going to be an increasing disadvantage as well, isn't it? Because as more and more sites pick up on HTTPS, it's going to be that gap in terms of rankings as well. If you just look at rankings alone, like the mobile rankings right now are separating quite a lot from the desktop rankings, because more and more people are kind of jumping on board the mobile train, so to speak. It's just going to have an advantage by nature all the time. I don't know if we would dial up the ranking change for HTTPS. That's something that we'd have to look at from a quality point of view rather than just force our opinion on the ecosystem. I mean, you can look at the numbers, just sheer numbers. If everything else is equal, then yes, it's going to make the difference, isn't it? Yeah, I mean, that's something users might see as well, where it's like everything is on HTTPS, except for this one website, and it's like, what's up with that website? What are they doing? But that's, no, I think that's in the really long run. I'd like to ask a question before we continue with the questions that were asked earlier. I have seen ranking fluctuations where I'm on the seventh page, and tomorrow if I check, I'm on the twentieth page, and the third time nowhere in the Google SERPs. But when I update my content and fetch it, it again gets the seventh page back, again goes to the twentieth page, and again is out of the ranking results. So why does it happen, and how can I possibly solve it? Yeah, so fluctuations like that can always happen. So especially if you're in the lower part of the search results, then you can easily see bigger changes in the rankings.
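Going back to the mixed-content point in the HTTPS discussion, a minimal sketch of two common fixes, assuming the embedded resources are actually available over HTTPS (URLs are placeholders):

```html
<!-- 1. Reference embedded resources over HTTPS directly: -->
<img src="https://www.example.com/images/logo.png" alt="Logo">

<!-- 2. Ask the browser to upgrade any remaining http:// subresource requests
     (this only helps if those resources can also be served over HTTPS): -->
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">
```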
It's something where you really need to figure out what it is that you can provide that's unique and really fascinating, that's special for your site, where your site obviously needs to rank number one. So kind of finding that area where you're not just good enough for like number 20 or number 30, but rather you're really the site that Google Search needs to show in the first position instead of all of these other ones. So that's- What can I, like if it is a fact-based website, then how can I keep the rankings stable or even make them stable? Because the fluctuations are actually hard to track. If I'm on the third page and I get on the fourth, it is easier for me to track, but if I'm on the seventh page of the Google search results and I go to the tenth page, it is a little harder. So what can I do with my content, or even with my off-page, my links, so that I can make it at least stable, for it to rank on a specific page or have a minimal negative effect? Yeah, I don't think there's anything you can easily do. You'll always see these fluctuations. So kind of tracking rankings is probably not so useful in a case like that, because you will see these changes and people might see your site a bit higher or a bit lower. But especially if you're on the third, fourth or tenth page, then those are fluctuations that are really kind of normal, and those are fluctuations that are probably not so useful for you, because they don't reflect what your site is really about, kind of the rankings where normal people would see you, because very few people would actually click through to the tenth page to see what is actually there. So that's something where I'd try to turn it around, and instead of tracking the rankings, really focus your energy on making more of the website, so that what you're aiming for is more like position number one rather than position number 20. John, if fewer people are clicking on page 10, will you eventually eliminate that? I doubt it. I think that's something we kind of keep in there because sometimes people are curious and they're like, I really think this is the right query but I can't find the page that I'm looking for, and you click through to page 10 or page 20 even to see what you're... That's if, like, you're looking for your aunt, she's missing or something like that, and you're looking for where she is in the world, and you'll just go through every single page. Yeah. There's the same results. I mean, we don't show everything. We show the first 1,000 results, so there's still room for more. But I think as a website, you really need to aim for the first position and make sure that everything that you're doing is in line with kind of finding one topic area or one set of queries where you really, really think you can provide something that should be number one. But that's not easy. There's no magic trick for that. I think everyone is looking for that. John, I have a question regarding the client-side rendering of JavaScript. Sure. I've posted in the chat the link to this demo shop, and it's 100% client-side rendered JavaScript. And the agency who built it said they read up that Google can read this and Google can really rank this. And it has not been indexed even in the last six weeks or so, and it's registered in Webmaster Tools and we have an XML sitemap. It's only 10 pages, but Google does not want to index it.
Can you say something about how well Google understands client-side JavaScript rendering, and if it's totally acceptable to have a website like that built completely in JavaScript? So I double-checked this to kind of see what was happening with this URL, because it seemed curious that we wouldn't be picking that up. What I noticed there is that this is essentially something that was only submitted with the Add URL tool and wasn't linked from anywhere else. So from our point of view, our systems kind of treated this as some random URL that we picked up with the Add URL tool, where maybe we'll index it, maybe we won't index it, maybe we'll wait and see what happens. So the main problem with regards to indexing here was really kind of like, we don't know what to do with this URL. I didn't double-check to see with regards to rendering if it would work. But if you've used the fetch and render tool in Search Console, you'll see pretty clearly, is it something we can render? Is it something we can't render? I also noticed just today when I was looking at it, it was kind of fluctuating in and out of the index, in that sometimes you could do a site query and trigger it, and sometimes it wouldn't show it. So maybe that's something where it's kind of just on the edge, but primarily because we think this is just some random URL on the internet, that's kind of why we're not using it. OK. So you're saying Google can understand this JavaScript. And if we build a site with 10,000 URLs using this, Google will be able to index it and understand it, can I say that? I think most likely that that'll work. But this is something where I would build up some sample site like this and try to use it normally, so that it can get indexed like a normal website would get indexed, and then see based on that. Well, the sample didn't work at all. It's been in Webmaster Tools, verified, since December, I believe, and we have submitted several sites. We've tried Fetch as Google, and I've myself submitted like 10 URLs, but none of them get indexed. And normally, if I do a little WordPress site with normal HTML, that happens within the next five minutes. Yeah. So in this case, it's really a matter of our systems just saying, well, we don't know what to do with this URL. It's like some random URL that we found with the Add URL tool. We can index it temporarily with the Add URL tool, but we can't guarantee that it'll remain indexed. And sometimes it drops out like this. Is there a way to see if Google will crawl this? Because so far, I've seen no crawling activity, and none of the category pages are indexed. Yeah. I mean, we wouldn't crawl it if we don't index the home page. So that's something where we first have to understand that this home page is worth indexing, and then we can pick up the rest. So double-checking just now, it looks like we can render that page just fine. I see the links to the individual books that are on there. So I'd say 99% this is something we can probably pick up. And would you still say that it would be better to have pre-rendering here, or to build this as normal HTML, or would you say this is just as good as building a pure HTML website? Lots of arguments on that topic. So what can I say? From the search side here, when I'm talking with the team that works on rendering, they say this should be just as fine. So this isn't something where they would want people to do pre-rendering.
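To make the client-side rendering discussion concrete, here is a deliberately minimal sketch of such a page (hypothetical markup, not the actual demo shop): the list of books only exists after the script runs, which is exactly what Fetch and Render lets you verify.

```html
<!DOCTYPE html>
<html>
  <head><title>Demo shop</title></head>
  <body>
    <!-- Empty container; everything inside it is created by JavaScript. -->
    <div id="app"></div>
    <script>
      // Hypothetical data; a real shop would typically fetch this from an API.
      var books = [
        { title: "Book A", url: "/books/a" },
        { title: "Book B", url: "/books/b" }
      ];
      document.getElementById("app").innerHTML =
        "<ul>" +
        books.map(function (b) {
          return '<li><a href="' + b.url + '">' + b.title + "</a></li>";
        }).join("") +
        "</ul>";
    </script>
  </body>
</html>
```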
From a practical point of view, I think sometimes it makes sense to use pre-rendering on a server side because it makes it possible for you to use all of the normal tools that you've kind of worked when it comes to SEO because anything can crawl the pages if they're pre-rendered on the server side. So that's something where, on the one hand, you can develop faster. Maybe if you're using a pure JavaScript framework. On the other hand, you can use your existing SEO tools if you're using something that's rendered on the server side. Some sites go this way. Some sites go that way. I know when looking at a lot of our own websites that we create, we tend to just make them in JavaScript. Yeah, well, you don't care about SEO. That's the difference. But we still want to have them indexed, yes. Like many of Google's sites are not mobile-enabled yet. The marketing side sometimes, yeah. OK, but I don't hear anything from you that this could be a disadvantage because so far everybody else I've spoken to said don't do a client-side rendering. Everybody else said, don't touch it yet. Because just logically, it's much more resource-intensive for Google to crawl a client-side website. I mean, you need to act like a browser out there. And of course, this is much more resource-intensive. So I thought maybe Google takes just longer time, maybe half a year, to index a few pages here. No, that shouldn't be the case. So what will probably happen is if you link to the sample shop from somewhere else so that people actually go there or that we can recognize that it's actually something that's embedded in the rest of the web, we will crawl and index that just like anything else. I think from a technical point of view, the more effort is more on your side as an SEO in double checking that all of these templates are working correctly, that we can crawl and index them properly, that fetch and render works well. Whereas with a static HTML site, you can just, I don't know, run all of your existing SEO tools over it. You know exactly what to expect. From that point of view, it's something that you kind of have to try out on your side. I think there's a big advantage for SEOs that do understand how client-side rendering works for SEO, because that's something that a lot of big businesses, they're building up their sites on this. So understanding how that works is definitely something worthwhile. Whether or not it's worth betting your whole business on something like this at the moment is, I don't know, I can't give you a guarantee that it'll always work out. OK, interesting, interesting. I actually have a follow-up question if you don't mind. This bookshop, it's actually in a provider, a logistic provider. And we will roll out 1,000 subdomains on this, 1,000. And so far, it's the same right now with the old version of this. And they have put all the books, all the product detail pages on disallow in the robots text, because they were afraid of a duplicate content penalty. And so none of the books rank. None, none of the single book is in the Google index because they didn't want it before. And I said, well, if none of the books are indexed, it doesn't matter. Why don't we just give Google all the books? And yes, we have 1,000 duplicate versions of it. But if they don't get any traffic right now, maybe one of the shops has a chance to rank for something, maybe a local book or something. And so is there a penalty if there's 1,000 duplicate versions of the same book, for example, of the same categories? 
For the most part, there wouldn't be. So the only kind of penalty or manual action we would have is like a set of doorway sites, which doesn't sound like this would be. This would be like just local businesses with the same product. Yes, exactly. From a practical point of view, what could happen is that our systems recognize that the exact same content is across all of these subdomains. And we'll just pick one subdomain to index. But we don't have anything to lose here, right? Because if you pick one, that's fine with me. And maybe it's one that's linked better. But you're saying, and I can quote you here, that there's no penalty if we just let Google index 1,000 versions of the same book like this one here. Sure, exactly. OK, good to hear. Thanks a lot. Hi, John. Yeah, how are you? Sorry, can I? You go first, yeah. Oh, thank you. Hey, John, I was wondering whether or not review pages are follow-worthy. My client has got like gazillions of review pages. They're all very, very similar. They just differ by the reviews. And they are very shallow in their content by themselves. But right now, they're not indexed. And I was wondering whether or not we shall have them indexed or not. I think in a case like that, there's no technical reason why not to have them indexed if your server can handle that load. From a practical reason, I think about what they should be ranking for. What queries do you think people would enter that would land on this review page and what would they be doing next? Like would they be looking for this review or are they actually looking for the product itself? And from that, you can kind of figure out, is it worthwhile to kind of invest in these pages being indexed or can I just leave them on no index and just say, well, people will be looking for the books anyway. They're not going to look for that specific review. So that's kind of like what I would look at there is what on these pages is worthwhile and how do I expect people to go there and what do I want them to do afterwards? All right. I was wondering because I would have suggested that we should just canonicalize them and point them to the original product page per se and have all the review pages indexed because, well, it won't hurt, would it? Yeah, but if you have them with the rel canonical to the product page, there are two things. One thing, the review page won't be indexed anyway, so it wouldn't rank for the review content. The other thing is that these pages would not be equivalent anymore. So you have the review and you have the product and our algorithms might look at that and say, well, these are different pages. The canonical is probably a mistake. I will ignore it. So it's kind of like the situation where you're not sure what the final outcome will be, where if you have a clear understanding of what you want to achieve with that, you can make that clear to Google and say, yes, I want these indexed or no, I don't want these indexed. So basically, out of user experience, I wouldn't have them indexed, I guess. And therefore, I shall stick with that, shouldn't I? I think that sounds reasonable in the situation that you're describing. It might be different, for example, if these reviews are really long-form pieces of content and they rank by themselves and people promote their reviews. That might be a different situation. But if these are like short snippets of reviews, then maybe it's worth just kind of embedding some of them on the product page rather than indexing them separately. 
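A minimal sketch of the two options discussed for shallow review pages, with placeholder URLs. The noindex route matches the reasoning above; the canonical route carries the risk John describes, that a hint pointing at a non-equivalent page may simply be ignored.

```html
<!-- Option A: keep the review page out of the index
     (e.g. on a page like /products/123/reviews/456): -->
<meta name="robots" content="noindex">

<!-- Option B: point a canonical at the product page. Riskier: if the review
     page and the product page aren't near-equivalent, the hint may be ignored. -->
<link rel="canonical" href="https://www.example.com/products/123">
```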
OK, I've got two single short questions which you might be able to answer as well. I put them down in the, is that all right for you? Sure. I'll hurry up. So OK, the first thing is that my client's got a car rental comparison website and I was wondering whether or not. Like there are so many people who do their searches on our website here. And whether or not we should just have those indexed as well, the result pages of our website, of my client's website. So search results pages within your website? Yeah, like they've got a car rental. They would kind of look at it in the same way. It's like what is it on these pages that would be worth showing in search and what should people be searching for to go there? So in general, we recommend not making search results pages indexable just because they don't really add a lot of value. It's a lot easier to send people directly to the content rather than to kind of this intermediate page within your website that ends up sending them to the actual content. OK, my last question for today is AMP related. And I was wondering whether or not there are any approaches to have AMP combined with JavaScript, like for instance, calendar, booking, or such. Yes, so AMP, the AMP format itself needs to be static HTML because that's something that can be cached with the AMP cache. Within the AMP pages, you can use something like an iframe to kind of have dynamic content. That might be an option for some things. But if you have a pure JavaScript framework website, then you still need to have the static HTML for the AMP pages somewhere. So you can't just have one JavaScript framework that can be loaded as an AMP version. It really needs to have that static HTML that can be served as static HTML. OK, thank you very much. All right, let me run through some of the other questions. And then we'll have more time for your other live questions as well. We're working on a responsive website template that we'll be using display none to hide certain portions of the page, depending on the viewport. Is this a problem? Because it's not visible in the rendered version of the page. So for desktop, what will happen here is if a part of the page is not visible by default, we'll assume it's not the primary content of a page. When it comes to mobile-first indexing, so when we switch at some point to indexing the mobile version, then that's less of a problem. Because we realize that sometimes there are difficulties with regards to kind of transforming content, making it usable on mobile that results in some content not being visible by default. John, putting on that subject, there's kind of, I guess, almost three ways that you can kind of use CSS to hide content. One is that you will hide content from everything. One is that you might then use it in a responsive side to hide it from mobile. And you might also use it to hide it from desktop, depending on how you do it. So if mobile-first is going to act more in this way, whereas previously, Google will look at desktop and then pretty much ignore anything that's hidden through CSS, whereas now it's going to start looking at that stuff. How will Google differentiate them between stuff that's just hidden and is never seen? A lot of the old-style tricks people used to do, they put some text, they'd over-optimize the text and shove it off the left of the page by 10,000 pixels. 
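On the AMP-plus-JavaScript question above, one way to embed a dynamic widget such as a booking calendar is the amp-iframe component; a minimal sketch with placeholder URLs (the component has its own placement and sandbox rules, so check the AMP documentation for the details):

```html
<head>
  <!-- The amp-iframe component script: -->
  <script async custom-element="amp-iframe"
          src="https://cdn.ampproject.org/v0/amp-iframe-0.1.js"></script>
</head>
<body>
  <!-- The dynamic booking widget lives in the iframe;
       the static AMP page itself stays cacheable. -->
  <amp-iframe width="400" height="300" layout="responsive"
              sandbox="allow-scripts allow-same-origin"
              src="https://widgets.example.com/booking-calendar">
    <amp-img layout="fill" src="/images/calendar-placeholder.png" placeholder></amp-img>
  </amp-iframe>
</body>
```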
So will Google presumably need to somehow render the desktop version of the page, as well as the mobile version of the page, in order to know what's actually hidden completely and what is just hidden on desktop on just mobile, I mean? Yeah, that's going to be hard. I don't have an answer for that at the moment. Sorry, carry on, sorry. No problem. When I do a site query in Google, are the results presented in any particular order? Not really. So for the most part, we do show the homepage on top for site queries, but a site query on its own is a very artificial query, and there's no really well-defined order within a site query. So if you do a site query and some keywords, then that's easier, because then we kind of know what to do with ranking. But just a site query on its own, I wouldn't pull any information on the order that's displayed there. I know Google doesn't take meta description into consideration at all, but why not? Both description and title are visible to bots. Why shouldn't Google use them? So I guess the main reason with regards to description is that it's not visible to users. So that's something that users themselves wouldn't see at all. And especially if you look back historically, then a lot of sites have done crazy stuff with meta descriptions. And if we relied on that, then we would get really crazy search results, I think. Whereas the title, for example, in comparison, is something that you do see on a page. You see that title in the tab, in the browser, in bookmarks. You kind of see what the title is there. So that's kind of the main difference there. With regards to mobile first, can you shed any light on when Google is going to release kind of a new blog post considering explaining all of the details? So I don't have any dates on future blog posts that are coming out. So I can't promise anything in that regard. How important is getting content above the fold? Is that a factor at all? I guess that's harder on mobile as well. From our point of view, it's less a matter of kind of having the content above the fold, but rather having something relevant above the fold. So for example, if we load a page and everything above the fold is just one big advertisement, then that's not really a good user experience. And users will think they landed on the wrong page. So that's something where at least something relevant to the page should be above the fold. Are there word limits for link titles? No. As far as I know, there are no limits for link titles, image titles, image alt text, all of that. There are no limits there, but kind of don't go overboard. Like you wouldn't with anything else on a site. Make sure that it's kind of reasonable and usable. So for example, one thing we sometimes see when we tell people, oh, H1 headings help us to understand what the page is about, then they'll wrap the whole page in an H1 and they'll format it to make it look like normal text. But they'll say, well, my whole page is the most important part on the internet, and that doesn't work. So just kind of try to be reasonable and do something that makes sense for the users and do something that actually provides some additional semantic value when search engines look at your pages. I'd like to jump in on that, because in HTML5, H1 tags aren't treated the way they were before, and therefore I was wondering how you treat this. Yeah. So what happens if there are 10 H1s? We see that as 10 H1s. 
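For the earlier question about a responsive template using display: none, a minimal sketch of the pattern being described: the block is in the HTML for every device, but CSS hides it on narrow viewports (the breakpoint and class name are arbitrary):

```html
<style>
  /* Hide the secondary panel on narrow viewports; it stays visible on desktop. */
  @media (max-width: 600px) {
    .secondary-panel { display: none; }
  }
</style>
<div class="secondary-panel">
  Supplementary details shown on desktop and collapsed on phones.
</div>
```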
So essentially, what happens there is we recognize that there are different sections on our page, and each of them have their own kind of heading on them, and we'll try to use that to understand the context of the content there. It's not the case that we'll say, well, there are 10 headings here, therefore the page is 10 times more important. It's more to understand the context of the individual sections. So that kind of falls into line with the HTML5 idea. We don't treat HTML5 in any ways differently than other HTML, so it's not that we will try to see, well, in HTML5, this is valid, therefore we'll take it into account. In other HTML, it's not valid, therefore we'll ignore it. So we'll kind of see it like any person put 10 H1 tags on the page. So nothing actually gets an advantage there, though. What does that in terms of giving the biggest signal from HTML5? Well, not in terms of, like, with regards to the rest of the internet, with regards to understanding the context of individual pieces of information on the page, it does give us a little bit more information. But it's not because it's an H1, but because we can recognize it's a heading for this piece of content. OK. Yeah, please. Probably do the same thing by using H2s or by having some other kind of heading on a page. I'm sticking with one. Yeah, that's, I don't know, that's something where some people are very picky and say it has to have only one and maximum, minimum one. And from our point of view, we have to deal with whatever is out there on the web. And a lot of times, people make interesting web pages that don't just stick to one or the other variation. I'd like to ask one more question before you continue with the other questions left by people. That is that if a site is already having a featured snippet for a given interrogative search query, then what can a SEO do to get to that spot and have his site answering the featured snippet or the question the user is asking? And that is the first question of this. And second question is that for featured snippet, what changes mean, what things you need to take care of for making sure, like basic tips for having Google to get a featured snippet and making it understand the soul of your text? Now, we don't have any explicit guidelines with regards to featured snippets, what you need to do, and how you need to mark things up for us to pick that up. We prefer to pick them up naturally. So when we crawl and index the pages that we can recognize, oh, this piece of text is actually really useful as an answer, we'll see if we can highlight that as a bigger snippet in the search results. But it's not the case that you can mark up to the pages and say, well, this is a featured snippet and this is, I don't know, 20 words, maximum, like two or three lines of text, we don't look at it like that. So we try to pick things up as naturally as possible. We've worked with the featured snippets team before to try to encourage them that we can publish something and say, well, this is what you need to do to create great featured snippets. And they're really kind of on the line of saying, well, we actually want to be able to pick up content naturally from the web. We don't want you or we don't want webmasters to artificially create pages just for featured snippets. So does anyone want to recommend? Well, anything that helps us to understand that your page best answers these queries, that helps us to understand this is probably a good piece of content. 
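A minimal sketch of the multiple-H1 case discussed above: each section carries its own heading, which is read as context for that section rather than as extra weight for the page.

```html
<article>
  <section>
    <h1>Getting started</h1>
    <p>Introductory content for new users.</p>
  </section>
  <section>
    <h1>Advanced configuration</h1>
    <p>Details for experienced users.</p>
  </section>
</article>
```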
And maybe we can take some of that and show that with a bigger snippet. Do you look at the feedback, if you submit feedback? Because I've seen some spammy ones that you've picked up that, I don't know. Yeah, that's something they definitely take into account. They review these submissions regularly to see whether things are working as expected or not. Every now and then, we'll find something really weird on Twitter as well, and we'll send that to them. They're like, oh, OK, we can fix this. But for the most part, they really want to be able to pick these things up naturally. And I realize, as SEOs, you're like, well, I will make it look as natural as possible. But I work hard so that Google actually thinks that my page is the best page of its kind. So it's hard to kind of balance the line there. But they really prefer not having explicit guidelines on what you should put on your pages so that we show them as featured snippets. All right, let me run through some more of the questions here, and hopefully we'll have some more time for you all. What is Google's approach to indexing pages that have CRO software altering their presentation and content? Which version do you choose? So I assume this is basically around personalization with regards to either location or whether the user is logged in or not, and Googlebot sees the version that Googlebot sees, basically, depending on the location that it's crawling from. Googlebot doesn't log in, so it doesn't have this personalization cookie, but it does have a location. So the version that Googlebot sees is the version that Googlebot will index, which means that if you're showing drastically different content for people in different locations, or people logged in or logged out, then you need to keep in mind that Googlebot is probably seeing the default version of your page, which might have more or less content than other people see. So that's something to at least keep in mind when you're working on kind of this high-personalization type of website. Currently, we've been adding nofollow to PDF links to prevent duplicate content. Is this a good or bad thing? You can do this. It's not something that I'd say is clearly positive or negative. Sometimes people search for PDFs, and they want to be able to find them. It's not the case that you get a duplicate content penalty for PDFs. And it's probably rare that you have the same content ranking as HTML and PDF in the same search results page. So probably you could just link to those PDFs normally and just let them get indexed. If there's content on there that people are using as a PDF, then maybe they want to find that in search as well. So I don't have any explicit answer there. That's something where I'd probably do some A/B testing with your users directly. We're using link titles on desktop. Users see those when they hover over the text with the mouse, but on a smartphone, obviously you can't hover over the text. What will happen with the mobile index? We'll probably keep treating them in the same way when it comes to the mobile-first index. I don't know if that will change over time, but at least initially we'll index them in the same way. I have a client that owns several domain names that are parked. They currently have a single domain that they are using. If I 301 redirect all of those parked domains, is that a good thing or not? That can be perfectly fine. So what will happen in a case like that is we will index your main domain, and we will index it like that.
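On the PDF-link question above, the two options look like this (the file path is a placeholder); note that nofollow is only a hint about following the link, not a guarantee the PDF stays out of the index if it's linked from elsewhere:

```html
<!-- Plain link: the PDF can be crawled and may rank for people searching for it. -->
<a href="/guides/setup-guide.pdf">Download the setup guide (PDF)</a>

<!-- With rel="nofollow", as in the question: -->
<a href="/guides/setup-guide.pdf" rel="nofollow">Download the setup guide (PDF)</a>
```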
So those parked domains, since they probably weren't indexed in the past anyway, it won't really change anything. For you, from a practical point of view, it just means that you can use those parked domains in advertisements, for example, where you want to have a nice-looking URL, and just redirect those to your main domain so that people actually see your primary content. But from a search point of view, this probably doesn't change anything. Let's see. I think some of the other ones here are kind of similar. If my www redirects to non-www and I receive links to the www version, do those links really count? Yes, they do count. So with a 301 redirect, it's essentially forwarding all of those signals to the other version. And that's perfectly fine. Yeah, so just a couple of minutes left. So maybe I'll just open it up for you all. What else is on your mind? Yeah, can I just ask? Can I just ask something? In the chat, somebody was talking, I can't remember who it was, somebody was talking about books, you know, the books with 1,000 sub-domains, et cetera. And you said that maybe Google would just pick up the one and obviously filter out the others, because there can be only one, really, in the index, as such. Or there should really be only one for a good user experience, et cetera. But those 1,000 sub-domains, if they're all on the one host server, you're going to kind of, in some sense, crawl them based on host load, et cetera. That's going to potentially be split across 1,000 sub-domains, because it's all at the one IP address, potentially, isn't it? So it's probably not the best use of Googlebot's time. It's got like 999 other sub-domains. From a crawling point of view, it kind of depends on the infrastructure. And I assume that David knows what he's doing and has set up a server that can handle the load. So from that point of view, I wouldn't be too worried about the technical crawl rate, crawl budget type questions. I assume that's something that can be worked out. Would it maybe be a better idea to have this all on one domain, do you think? Yeah, I mean, it's more of a strategy decision in a case like that. Some sites choose to do it almost on a per-product basis, in the sense that they say, well, this store is special for wooden garden furniture and this one is for metal garden furniture. So I have the same products on both stores, but they have the canonical for the wooden ones here and the metal ones here. So that's something that could be done as well. If it's just books, then I don't know if it makes sense to kind of pick a per-product canonical, but you can guide this however you want. You can leave it up to Google to kind of figure it out. It's more of a strategy decision, I'd say. So, John, I just wanted to ask you, converting from HTTP to HTTPS, when you resubmit the sitemap in the HTTPS version, why does it take so long? I mean, it's been like three and a half weeks for one website. I don't know. Which website is this? Is this like Facebook.com? Like one of those small websites? No, I can just make decisions. No, because it's something where, depending on the website and how often we crawl the pages, it can take longer, it can take less time. It's something where it's kind of normal to see some things move very quickly and some things just take a lot of time.
And especially if you're looking at a bigger website and you're looking at like a site query, then you'll see those old URLs linger around for quite some time. No, it's just 1,000 pages. So every day, it's like a countdown. 80, 100 left, 200. Yeah, I wouldn't focus too much on those numbers, because if they're redirecting, then users will get to the right content anyway. It's not that users are getting stuck or that you have to force it to be HTTPS. OK. Thanks. All right. Is there time for one more quick question or not? I need to jump out of the room. But maybe you could drop it in the comments, and I can pick it up there. All right. So thanks a lot for joining again, and hope to see you again in the future in one of the next Hangouts. Have a great day. Thanks a lot. Good seeing your faces. Bye, everyone. Bye.