All right. Welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst at Google here in Switzerland, and part of what we do are these Office Hours Hangouts with webmasters and publishers from around the world, discussing any web search or website-related questions they might have. All right. As always, there are a few questions that were submitted. But if any of you want to get started with the first questions live, feel free to jump on in now. I've got a really quick question, John. With your header tags, H1s, H2s, H3s, does it matter what order they're in? So for example, in the code, if maybe I've got a couple of H3s before it starts H1, H2, H3, does that matter? Is that important, or do you need to have that logical structure of H1, H2, H3 throughout every page? That doesn't really matter. So we use the headings to understand the context of the content on the page a little bit better. And for that, we don't need a strict order of the heading tags. And sometimes you have multiple H1s. Sometimes, I don't know, the order is slightly different because of your template or something on your page. That's perfectly fine. Do you weight certain header tags? So for example, is an H1 more important than an H2, an H2 more important than an H3? Or will you treat all header tags equally when understanding the page's content? I don't know for sure. My guess is we would take the H1 and say this is probably the primary heading on the page. But we have other hints that give us a little bit of information about that as well. So I suspect it's like this tiny amount just for the H1, where we say, oh, if it looks like a primary heading, it probably is a primary heading. Maybe we should assume that the rest of the page is kind of about this topic. OK, awesome. Because if a page just had an H2, then I would generally recommend having that as an H1. And I think that's probably the best recommendation, really, at least as a quick fix. I think that's a good recommendation. I wouldn't expect it to make any big, significant changes in search, though. OK, cool. That was it. Thank you. Sure. All right, any other questions from any of you? What else is on your mind? I'll ask a question. Go for it, Rob. Within Webmaster Tools, there's the markup tool. I can't remember what it's called, the highlighter. Data Highlighter, is it? So if you're trying to tag a product with price, review, et cetera, once you get to the final stage and it's kind of accepted those tags, when it then starts to go through the rest of the similar pages, it's having a problem, because our site is a flat architecture. So it thinks basically every page is a product page, but then can't find the same tags. And we've got, whatever it is, 200 categories, 1,000 blog pages, et cetera. So what should we do? Or should we just ignore the Data Highlighter and use the normal markup anyway? I think if you can use the normal markup on the page, I would definitely go for that. So I wouldn't restructure a site to work with the Data Highlighter, because with the same amount of work, you could just use the markup directly. And that way, you don't have to worry about any system trying to figure it out for you. It does exactly the same thing anyway. OK, fair enough.
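For reference, the normal markup John recommends over the Data Highlighter could be schema.org structured data, for example as JSON-LD. A minimal sketch for a product page follows; the product name, URL, and values here are placeholders, not anything from the discussion:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Example Product",
      "image": "https://www.example.com/photos/product.jpg",
      "offers": {
        "@type": "Offer",
        "price": "39.99",
        "priceCurrency": "GBP"
      },
      "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.4",
        "reviewCount": "89"
      }
    }
    </script>

Marking up the page directly like this avoids relying on the Data Highlighter's pattern matching across similar-looking pages.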
All right, Nick, I think you had a question. Yes, I had a pretty extensive, or complex, question that I posted. It's revolving around an update that I did for my website. I had a pretty long static page. Sorry, first, let me go back. I did the update about a month ago, a little over a month ago. And since doing the update, my traffic has been basically cut in half, and it continues to slowly decrease over time. So every day, I seem to be having fewer impressions and page views. The initial website was a static website with several million pages, and each page was its own static HTML file. They were very long, relatively large pages. And so in the new layout, what I've done is I've split up those pages into what works out to about four pages, where I have a main page and then three subsections. And the subsections are fed in using XHR requests, or Ajax. So it's basically like a single-page app, except that it's only for that one page. And I'm using pushState to update the URLs. Each URL has its own page. So if a user were to type in the URL, they would get the page, the section of the main page, and the subpage for that URL. And in addition to that, there's an AMP canonical of the page. So if the user goes to the AMP version, he'll get basically an AMP version, which is exactly the same page, minus whatever JavaScript code I need to do the single-page-app-type functionality. And in the AMP version, the links will just link to the static URLs, and so on. Anyhow, originally my initial impression was that I was afraid that somehow the links to the subpages weren't being indexed, because I didn't see them being crawled in my logs. But over the month, those have begun to be picked up, and I'm seeing a certain amount of traffic going to those pages. So now I'm under the impression that maybe what's happened is that I've taken all the data that was on one page, split it up into basically four parts, and I think I might have diluted the value of the initial pages, the main pages that are indexed. And I'm wondering if I could use rel-previous and rel-next for these pages to somehow consolidate them, to let Google know that this is essentially one page and that the main page is probably the key page in this case. Yeah. I think there are a few things going on there which make it really hard to figure out what the exact source of the problem could be. So you're saying this is a pretty big site. Did you do this across a bunch of pages, or was this on one particular page? I redid the site from A to Z. And I flipped the switch one day, a little over a month ago. It was done. Everything was working. Everything was tested. I flipped the switch and went live with the new version. OK. And how many pages, approximately, did you split up like that? Well, the exact number is hard to know for sure. But roughly? It's a lot. It's in the millions. OK, it's in the millions. OK. So I think there are two things probably going on here. On the one hand, you're splitting up one perhaps strong page and making four smaller parts of it, which can work. But it also means that we need to have the full context within each of the individual parts. So sometimes what happens is the value of the big, longer page is partially because there are so many different aspects on that page that make it useful for a variety of people. It's kind of like one stronger page for kind of a bigger group of a topic.
Whereas if you split it up into smaller pieces, those smaller pieces themselves might not be that clearly targeted, where we can really recognize that actually all of the information is here, and they can navigate to the rest of the site from here. So that might be something that's playing a role there. In general, something like this is something I'd try to test before switching it across the whole website. Obviously, afterwards, that's probably easier to see. But especially when you're making bigger changes across a website that could affect search like this, taking a significant sample of your pages, making those changes, and seeing how that's reflected in search is usually what I would recommend doing there. The other thing is, if these pages are now essentially single-page apps, is the primary content of the page loaded statically, or is that also loaded dynamically? Well, the way I have it structured is that the pages are based on an entity, and then there's sub-information for that entity, more detailed. You can think of it as, say, for a sports player on a baseball team or whatever, you'd have the description, and then you'd have certain stats underneath it. And so the initial page, as it loads, is static, and then the sub-statistics are added on using XHR requests afterward. So if the main content is loaded statically, in the sense that it's returned in the HTTP response and on the page directly, then for the most part, that would work from a JavaScript-indexing point of view. That wouldn't be a problem. What usually ends up being more of an issue is that anything that requires rendering sometimes has a bit of a lag, which could be a couple of days, for example, for us to actually render the page after having crawled it. And when it comes to a larger site, that also means it takes a couple of days for us to pick up the link to the next page, and then from there, if we have to render that page, to get to the next page after that. So for sites that are really large or that are changing very quickly, that's something to keep in mind. If you migrate to a JavaScript setup, then all of the crawling and the indexing, following through the links, will be a lot slower than before. So that's one thing where I'm not completely sure that it would be playing a role in your case, but it's something to keep in mind. For example, if you were to switch everything over to being JavaScript-based, the full content, the primary content as well, then that would definitely be playing a role. For smaller sites, for sites that don't change a lot, that's less of an issue. But when I think of a site with a couple million pages, probably you have a couple of thousand pages coming and going every week, and this kind of additional crawling activity that's required to maintain a stable state in indexing probably is not so well suited to a JavaScript site at the moment. The site is actually very static. The data doesn't change. In fact, the data is identical to the data that was there before. It hasn't actually changed. It's just the layout, let's say, that's changed. The other thing is that when the user clicks on the button to see the next section and the URL is changed, if you go directly to that URL, that URL is essentially statically served on the first impression. In other words, the first impression, right now the way I have it set up, is static. And then if you want to add stuff afterwards, those additional calls are added through Ajax, but the initial page is static.
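A rough sketch of the setup Nick is describing, with progressively enhanced links, a static first load, and XHR plus pushState for subsequent navigation, might look like the following; the selectors and URLs are made up for illustration, and a real implementation would fetch just the section fragment rather than a whole page:

    <!-- Real <a> elements, so the links work (and can be crawled) without JavaScript. -->
    <a href="/players/example-player/stats" class="section-link">Stats</a>
    <div id="section-content">...</div>

    <script>
    document.querySelectorAll('a.section-link').forEach(function (link) {
      link.addEventListener('click', function (event) {
        event.preventDefault();                    // suppress the normal navigation...
        fetch(link.href)                           // ...and load the content via XHR/fetch instead
          .then(function (response) { return response.text(); })
          .then(function (html) {
            document.querySelector('#section-content').innerHTML = html;
            history.pushState({}, '', link.href);  // update the address bar to the section URL
          });
      });
    });
    </script>

Because the href is a real URL that serves the content statically, users without JavaScript, and crawlers, still reach the same page.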
And that allowed me to do the AMP version as well, because the AMP version is then static, and in such a manner, I don't need to have the Ajax there to be able to do AMP. So all the pages have, let's say, a static version, which is also an AMP version, plus the dynamic capability that has the JavaScript. That sounds pretty good. So that would make me more comfortable with regards to the JavaScript part. One thing sometimes that people miss is that within the JavaScript site, the navigation itself should also be with normal link elements, so A elements in the HTML. You mentioned you have a button. Maybe it just looks like a button and it's actually a link. But if it's like a div and you have an on-click event there that actually does the navigation to the other URL, that would be something that we'd probably miss when crawling. So I'd just double check to make sure that those are actual A elements. It's an A element. And then I have an event listener. If the user clicks and the JavaScript is on the page, the event listener prevents the default functionality, and so on. So even if the user doesn't have JavaScript enabled, or is using a browser that doesn't handle the JavaScript as I wrote it, it will still go to the next page. So I think I've handled that. OK. Yeah, I think from a JavaScript point of view, that sounds pretty comfortable to me. So I think that part wouldn't be too much of an issue. When it comes to JavaScript, what you'd probably see is that the additional pages will still take a little bit longer to actually be indexed, but once they're indexed, it sounds like they'll be stable. So it's not the case that you need to keep up with indexing new content, which means that this would probably settle down, from a JavaScript point of view, usually within a couple of weeks. So I feel pretty comfortable in that regard. I assume the changes you're seeing are then mostly due to the content being split up onto multiple pages. So would rel-next and rel-previous help to mitigate that to some extent? It could, slightly, but it probably wouldn't have a significant effect, in the sense that we would understand that these pages belong together, which we probably already see from the internal linking on those pages. But I don't think you could cause any harm with that. What I might consider doing is just double checking it from an A/B testing point of view, regardless, even though you've already moved. Maybe take a small section of your site and put that back onto single pages, essentially, and see if you can measure a difference between the search traffic going to the single pages versus the search traffic going to the split-up pages, and try to figure out, is this something that works for me, or is this something that actually causes problems? Maybe the difference in search is irrelevant, and the site overall has just changed in search over time. That might also be an aspect to try out there. OK. Thank you. Sure. I have another question, but I'll leave it till after we see the results of the rel-next and rel-previous. All right. Let's see, there's a question in the chat related to heading tags as well. If the main heading is an H2 and all the other subheadings are H4s and H5s, is it fine from Google's view to use tags this way? Yes, that's fine. There is definitely no penalty or any kind of manual action or, I don't know, picky algorithm that says you have to have exactly the right order, and exactly all of them if you use any of them. You can kind of pick and choose. Headings help us to better understand which parts of the content belong together. So if they're under the same heading, that's essentially what we're looking for. It doesn't really matter that you have exactly the right numbers and all of that.
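As a concrete example of what that means, a page like this sketch would be fine for Google even though the heading levels skip around; using an h1 for the primary heading is simply the cleaner recommendation:

    <h2>Growing tomatoes at home</h2>   <!-- primary heading, even though it's an h2 -->
    <p>...</p>
    <h4>Choosing a variety</h4>         <!-- subheadings jump straight to h4 -->
    <p>...</p>
    <h4>Watering and feeding</h4>
    <p>...</p>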
All right. So let's take a look at what was submitted. Does Panda use a machine learning classifier? For example, feed the classifier a bunch of sample pages and sites to train it on the difference between high- and low-quality content. Maybe it's too complex for the current algorithms, but it might be an approach for the long run. I don't actually know what the Panda algorithm uses, so I can't really answer that question for you. In general, we do use a lot of machine learning to try to better understand how we should crawl, index, and rank pages. So that's something that wouldn't be totally out of the question, but as far as I know... well, I don't actually know what the Panda algorithm does there. So that's kind of something where I think machine learning has a lot of potential to try to understand pages a little bit better. It's not an automatic solution, though. It's not that we can just feed it a bunch of pages and say, these are good, these are bad, and then it'll figure out the whole rest of the web on its own. It does take quite a bit of work to actually make machine learning work well enough. Can AMP increase rankings? According to Stone Temple, one site saw kind of a traffic lift, and another site saw some traffic lift as well. Could it be that AMP increases rankings? I don't know. As far as I understand, AMP does not increase rankings; it just shows your site slightly differently. So of course, depending on how you track the rankings of your site, if your site is shown, for example, in the top stories carousel, then that might be something that looks like a ranking boost there. But a lot of these search elements are available for any type of page. So from that point of view, it's not that you would automatically get a ranking boost just from using AMP. We think AMP is a great technology, a great way to make pages really fast and really usable. So I'd totally encourage people to try it out, but I wouldn't do it with the mindset of, I will implement AMP, and then I will get this automatic ranking boost in Google, and all my troubles will be gone. You still have all of the normal issues around web search, where you have to make a really good website. You can't take bad content, just place it within a different kind of HTML format, essentially, and assume that suddenly this lower-quality or bad content would be seen as more important by the search engine algorithms. So I think it's important to set expectations there. How likely is it that an ugly or difficult-to-use site could cause ranking issues? So that's an interesting question. I think for the most part, this is something where you're more likely to see indirect effects, in the sense that when users go to your web pages and they really don't know what they need to do there, then probably they won't convert, and probably they won't recommend your site to other people.
And then over time, this indirect effect may grow, in the sense that we don't see a lot of signals that are positive for the site. We don't see a lot of links coming to the site. So maybe the site isn't actually as good as we thought before. So it's not that we have an ugly-page classifier that demotes pages which are not really that fancy or modern, but more that there are just these normal indirect effects that you would have with any type of content that you put out online. That said, sometimes older content that looks a little bit old, and kind of ugly, I guess, like a really old FrontPage site or something like that, might be just as relevant over the years as something that's completely new and modern-looking. So just because something is modern doesn't mean it's more relevant, either. A question regarding rankings and mobile-first indexing. Suppose a B2B business has a desktop-friendly website with good engagement. They get 80% of their traffic via desktops, but the website is not mobile-friendly. It's not really bad, just not a good mobile site. Is it possible for them to lose rankings because of that? That's perfectly fine to have. So with mobile-first indexing, what will happen is, we check a number of things first, and if we see that your site is ready for mobile-first indexing, we will switch to the mobile version for indexing. So in a case like this, the mobile website that they have will be the one where we pull out all of the content, all of the structured data, the images, videos, all of the information that the website is providing for search. We'll take that out of the mobile version. If the mobile version is just not really that nice, but actually has all of the normal content, then that's perfectly fine. There's nothing lost by switching to the mobile version in a case like that. If you have a separate mobile site, like an m-dot site and a normal desktop site, then we'll still show the normal desktop site to users on a desktop device when they search. So for the most part, if the content is the same on the mobile site, even if it's not a great mobile site, then they shouldn't see any change in search. That should just continue to work like this. That said, I think it's always misleading to draw the conclusion that there are not a lot of users on our mobile site, therefore mobile is not important for us, and use that as a reason to say, therefore, we won't make a good mobile site. Because what could just be happening there is that people on mobile try to use your site, and they see that it's really hard to use, and therefore they'll stick to using a desktop. Or maybe they'll go away, and only the desktop users will be left. So I wouldn't assume that if you have a bad mobile site and few people access it on mobile, you don't actually have a lot of mobile users. Maybe you're just sending them away, and they're actually not going to your site at all. So that's one thing to keep in mind. I see this as a problem on our side as well. When talking with the Search Console team, for example, they say they don't see a lot of people on mobile. But actually, if you have trouble using the site on mobile, then that kind of makes sense, right? Then it's kind of normal that people don't actually use it that often on mobile. Let's see. Yeah. Anyone have a question? I keep hearing a voice in between. Hello. Hi. I have a question about my site. Suppose a user opens a website.
Suppose they search on Google for queries related to my website, and Google is showing the cached version instead of my website. There is an issue. Every time, for 90% of users, Google is showing the cached version. How do you mean, the cached version? The googleweblight thing. The URL is not my main website. OK. So it's showing the wrong URL, the wrong address? Or what are you seeing? I mean, suppose a Google user clicks on my website in the search results. Google is showing the cached version of that web page instead of the main page. 90% of users will get the cached version from Google. Oh, could this be the lighter mobile version? Is that possible, the Web Light version? No, the URL is googleweblight.com, something. I'd probably need to have a bit more information on that. What I would recommend doing there is starting a thread in the Webmaster Help Forum, together with maybe some screenshots and the URLs that you are seeing. And then we can take a look there to see what might be happening in that specific case. It sounds a bit weird, but it's hard to know exactly what is happening. Hi, John. Hi. I've posted in the forum. It's basically around running in a multilingual or bilingual country, such as Switzerland, and whether there are any nuances in relation to hreflang when you're running on a .ch or a Belgian domain or something like that, compared to, say, hreflang from a .de or a .fr. Is there any site setup that you would suggest for, say, Switzerland? You can use any of the normal setups there. I think you don't need to do anything special with regards to hreflang for Switzerland. Obviously, you have the different languages, the different language and country versions, if you want to do that. What I would really try to make sure you have is some kind of a backup there as well, something like a banner on the page that you can trigger with JavaScript, where you can recognize that maybe the user's browser is set to a different language than the page they're on, or they're in a different country than the page is for, and where you can guide them to the right page, something that says, hey, it looks like you speak German. Here is a German version. You can click here. Just because, with hreflang or with any kind of geotargeting, essentially, there's always a chance that people visit the wrong version of your page, and you kind of need to guide them to the right version of the page to make sure that you don't lose too many conversions. OK, thank you. But otherwise, just language and country hreflang should be perfectly fine. It should just work, essentially. OK. And because you've kind of reduced the geo-barriers, as I would call it, where you're showing, say, Swiss content, French Swiss content in France now, is there anything we can do to stop this kind of cross-border serving of listings? How do you mean? Well, what we've noticed is that in France, for instance, you're now showing French content from Switzerland, and .co.uk content in the US, and so forth. And is there anything that can be done about this? Because obviously, we want traffic from Switzerland to come to our Swiss site, and not our French clients going to a Swiss site and then getting confused, affecting conversion and user experience and so forth. Yeah, so some amount of that you can't avoid completely. That's why I recommend having this backup, kind of like a banner on top.
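A minimal sketch of the kind of JavaScript banner John suggests, assuming a French-language Swiss page offering a German version; the URLs, wording, and IDs are placeholders:

    <div id="lang-banner" hidden>
      Es sieht so aus, als ob Sie Deutsch sprechen.
      <a href="https://www.example.ch/de/">Zur deutschen Version</a>
    </div>
    <script>
    // Suggest (never force) the German version if the browser prefers German
    // but the current page is not the German one.
    var prefersGerman = (navigator.language || '').toLowerCase().indexOf('de') === 0;
    var pageIsGerman = (document.documentElement.lang || '').indexOf('de') === 0;
    if (prefersGerman && !pageIsGerman) {
      document.getElementById('lang-banner').hidden = false;
    }
    </script>

A banner like this keeps the user in control, rather than redirecting automatically based on a geo or language guess.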
What sometimes I've seen is that the individual language versions are actually the same across countries. This is really common. I see this a lot with Germany and Austria, which both use the euro, where if you have exactly the same content on the German site as on the Austrian site, then our systems might say, well, actually, we can fold this together and just index one version. So that might be something that's happening in Switzerland with the French-for-Switzerland and the French-for-France versions, where if we can see that the content is actually the same, our systems will say, well, we'll make it easier for the webmaster and just index one of these versions, which is probably the opposite of what you want, but our systems are trying to be helpful. So what you can do in a case like that is really try to make sure that, as much as possible, the content is actually different. Yeah, we actually do make sure it's unique. But then also, we don't want to be seen as trying to spam by making artificially unique content for the different languages. I think that's always a challenge, especially with e-commerce, where you can be unique on things like category pages and home pages, but on product pages, what are you going to do, write the Swiss slang word or something? It doesn't make that much sense. So that's something where being as unique as possible, to make sure that we don't fold things together, helps. You can double check whether we're folding things together by using an info query. I don't know if you've seen that. You can just do info, colon, and the URL, and it'll show you the URL that we picked as the canonical one, for the most part, which is a useful way to see if the French-for-France content and the French-for-Switzerland content is being folded together. If it's not being folded together, then essentially the rest should continue just to work. Another thing, now that I think about it, with regards to hreflang, that I've also seen recently as being a bit tricky, is that depending on what scripts you run on your page, some scripts insert non-head elements into the head of a page, which could be something like an iframe or a div. We've seen scripts from DoubleClick and some analytics scripts do that, where they insert an iframe into the top of the head. And if there's an iframe there, then our systems will assume that the head is actually being closed, because this doesn't belong in the head, and they will drop any hreflang links that follow it. So that might be something to double check a few of the pages for. You can double check that with the Rich Results Test in Search Console. Well, I think it's separate from Search Console, but it's kind of the same UI. There you have a View Code button, where you can see the code that was generated after we render the page, and you can double check to see if there's an iframe ahead of the hreflang links.
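Putting that together, a head section along these lines keeps the hreflang annotations ahead of any third-party script that might inject an iframe and implicitly close the head; the domains and URLs are placeholders:

    <head>
      <title>Chaussures – example.ch</title>
      <!-- hreflang links early in the head, before scripts that may inject
           an <iframe> or <div> and cause everything after them to be dropped: -->
      <link rel="alternate" hreflang="fr-ch" href="https://www.example.ch/fr/chaussures">
      <link rel="alternate" hreflang="de-ch" href="https://www.example.ch/de/schuhe">
      <link rel="alternate" hreflang="fr-fr" href="https://www.example.fr/chaussures">
      <link rel="alternate" hreflang="x-default" href="https://www.example.com/">
      <!-- analytics / ad scripts only after the annotations -->
    </head>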
OK, thank you. That's helpful. Cool. All right, so let's see what else was submitted here. I have a blog with good content, and people are visiting my blog daily for new information. But the thing is, the bounce rate is increasing. Is that bad, essentially? Well, I think primarily, when you're looking at your bounce rate, I would use it as a way for you to understand your site and your content better. This is not something that we would focus on for Search. But if you're seeing the bounce rate significantly change across your website, then that might be worth drilling down into, to figure out why that might be happening. Why are people going to your pages and then saying, well, actually, this is not as useful as I thought it would be? So that would primarily be a signal to me as a webmaster saying, maybe there's something wrong with these pages that I need to fix, or something that's not working well from a technical point of view. Maybe the pages can't be loaded properly by these users. Maybe try to drill down and see what kind of users, which locations, what they've been searching for, where they're bouncing, and use that to try to figure out what you can change on your website in general. From a search point of view, that's not something I would worry about. We make a lot of facet combinations on our e-commerce site noindex, because it seems good for users that they can use these searches, but the pages seem to be poor in content. Is this a good way to do faceted navigation? You can do it like this, with noindex on the facets. The important part is just that we can use the pages which are indexed, within the category sections of your site, to find all of the individual products on your website. So for example, if you noindex all of your category pages because you think none of these search pages should be indexed, then it'll be a lot harder for us to actually find the individual products. On the other hand, if you let us crawl all of them, then we can definitely find all of these products, but then you have that situation where you're saying, actually, you don't want all of these pages indexed. So finding some kind of middle ground, I think, makes sense. That could be to say the different facets are noindexed. It could be to say everything after, maybe, page five or page ten in a paginated list is noindexed. That's something which you probably need to look at on a per-site basis. Just primarily make sure that we can actually crawl to all of the individual product pages. If we can crawl to the individual product pages without running into a noindex page, then essentially that works out. We also have more information on faceted navigation in the Help Center, as well as a fairly detailed blog post. So I would double check those as well for maybe some more tips on things that you could do there.
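As a sketch of the middle ground John describes: the category page stays indexable so the products can be found, while the faceted combinations carry a robots noindex; the URLs here are hypothetical:

    <!-- https://www.example.com/shoes — category page: indexable, no robots meta needed -->

    <!-- https://www.example.com/shoes?color=red&size=9 — facet combination:
         still crawlable, so links on it can be followed, but kept out of the index -->
    <meta name="robots" content="noindex">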
When will the carousel for e-commerce AMP pages appear in France? I don't know. For a large part, we try to avoid pre-announcing things, because sometimes things go wrong at the last minute, or something changes in the policies or other guidelines that we have to watch out for. So we try not to pre-announce these things too much. I do know that there are lots of people working on AMP pages at Google, and they're trying to make sure that these technologies are available worldwide as much as possible. So I would keep pushing, maybe ask on GitHub or in the forum where you can ask specifically about AMP stuff, and make sure nobody forgets about it. But France is a big country, and e-commerce is a big area, so it's definitely not something that we're just trying to ignore. Probably something else is making it a little bit harder than expected. Two different domains for an organization: how do I make sure that Google knows that this blog belongs to this website? Do I have to put it on a different domain? Do I put it on a subdomain? How can I tell Google that this blog essentially belongs to this primary website? Well, I think the good part is, you don't need to make sure that Google knows. We can crawl and index a blog just fine. If we have links to the blog and from the blog, from your other site, then that helps us to understand that these two belong together. But it's not the case that you need to do anything particularly technical to combine the two. I know some people like to put their blog on their main website as well, on the main domain. That's perfectly fine. I think if you have content that really ties in well with the primary content on your site, that's a great way to provide additional information for your website, for your content. Some people like to keep a blog more isolated from their main website, maybe more about non-business topics that they want to bring out, and maybe it makes sense to put that somewhere else. It's really totally up to you. John, just a question about that. You've kind of said before that, say, if my website is around vegetables and I'm writing about fruit, because they're kind of interdependent, then actually a blog can be beneficial. But sometimes you might see the blog driving the wrong kind of user experience or content experience. I don't think that would be that bad. I think it would be more problematic if it's really totally off-topic. So if you have a website about vegetables and your blog is about racing cars, then it's like, I don't know, that's a very different audience that you're talking to. I don't think you'd get a lot of value out of people looking at your racing car content and saying, oh, I think I should buy some vegetables today. I mean, it can happen, but there's very little crosstalk there, I think. So that's the kind of situation where I'd say it probably doesn't do you that much good to combine the two. But if you're close, like vegetables and fruits, well, there's a logical connection there somehow. I totally wouldn't worry too much about that. OK, thanks. We have a site with large amounts of tabbed content. Should we unhide it or wait for mobile-first indexing? Gary said content hidden in tabs is OK in a mobile-first world. Yes, when we look at pages for mobile-first indexing, we're OK with content not being visible by default. That's something that, especially on mobile, is really hard to avoid, because you have such limited screen real estate to work with. So with regards to should you unhide it or just wait: my recommendation there is, I would just wait. If it's already been like this for a while, then I would just keep it like that. The mobile-first index is something that we've been rolling out over time. I expect sites that are ready, where we can tell that the text is there, that the internal and external links align, that the structured data works, that the images and videos work on mobile, will be switched over sooner than all of the rest. It sounds like you already have a fairly good mobile site, since you're already talking about mobile-oriented content. So I wouldn't revert that design and say, oh, we'll make a desktop-like site just to make the current desktop algorithms happier. I would aim for the longer-run picture instead. For the past two weeks, I've noticed that the add URL feature from Google is not working like it was before, and URLs are taking more time to get indexed in Google.
What could be the problem here? So in general, we have the system for the add URLs, which is really useful for us to get new and updated URLs. But we don't index everything that we get, and this is something that's always been the case. It's not that you can just take any random URLs, stick them into the tool, and expect that they're indexed within minutes. Our systems really do look at a number of signals to figure out whether it makes sense to index something quickly, or whether it's something that we can pick up at a later stage during normal crawling. So that's something that essentially has not really changed. My recommendation, if you want to get things indexed fairly quickly, is to just use the tool within Search Console: do a fetch and render, and then you can use Submit to Index there. That's a bit of a clearer signal for us that this is actually something that you need to have indexed, because we can check the page beforehand to make sure that it's actually a normal page, that it's not a noindex, not something that wouldn't be indexable anyway, and not an error on the server side. All of that can be checked when we do the fetch and render. So, I dropped a link in the group chat. OK. Thanks. Oh, OK. That's with regards to the issues that you were talking about. Oh, yeah. So I see the URL that you have linked there is Google Web Light. Web Light is a way of transcoding content to make it more mobile-friendly and faster. We have a help page specifically on that. Let me just get that for you, and I can drop it into the chat as well. Web Light is something where you do sometimes see that we rewrite the URLs and transcode the content, using that system to make it a little bit faster and a little bit easier to digest on mobile devices that need this. There's a way to opt out as well, if you don't want to have that done. I think it's an HTTP header that you can set. And there are a bunch of frequently asked questions on that Help Center article as well, which hopefully gives a little bit more information on what's happening. This site is already loading in 600 milliseconds. That's pretty awesome. So I would double check the help page to see whether it would make sense, perhaps, to opt out of this transcoding, or whether there's something else that you might want to do. What I would also do, in a case like this where you believe that your pages are actually really fast already, is maybe just send some feedback to the email address that's listed on the help page, so that someone from the team can double check to make sure that our algorithms are doing the right things there.
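For reference, the opt-out header John mentions is, as far as I know, the no-transform directive; a response that opts a page out of transcoding would look roughly like this:

    HTTP/1.1 200 OK
    Content-Type: text/html; charset=utf-8
    Cache-Control: no-transform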
OK, new page. I did Fetch as Google for desktop, and I'm seeing the page indexed for desktop and mobile. In such a case, should we do Fetch as Google for both, or is one enough? So I assume you're asking about submitting that page to indexing afterwards. From our point of view, just submitting one is perfectly fine. We will automatically figure out how we should crawl and index this afterwards. There's no need to submit these separately for mobile and for desktop. My website is being penalized by Google, and I've requested a review a third time and still haven't recovered. I disavowed more than 90% of my website's links. What else can I do? So what I would do in a case like this is go to the Webmaster Help Forum and try to get advice from the folks there to see what else might be holding things back. If you've already done three reconsideration requests, and each time you're disavowing a little bit more, then that sounds like there might still be some significant problems that maybe you're not seeing by yourself. So I would definitely try to get some advice from peers, from other people who've run into similar situations, who've helped other websites resolve these situations, and make sure that your next reconsideration request actually has everything covered fairly well. One thing to keep in mind is that when the web spam team sees reconsideration requests for the same thing keep coming in, they might assume that you're not actually willing to fix the problem, that you're just incrementally saying, OK, I added two more links to my disavow file. Is this enough? In your case, it doesn't sound like that's what you're doing, so maybe that doesn't apply, but it's still something to try to avoid. So when you get a manual action from the web spam team, keep in mind that someone will manually be looking at this, and that they're not going to be swayed by a comment saying, well, I added two more links to my disavow file. Is that enough now? I know there are still lots of bad ones, but I don't want to list them. They'll take a look at this and see, well, the bigger picture is still there, or you're still going out and buying links at the same time. That's something to watch out for. So I'd try to get help from other people who've run into similar situations and make sure that your next reconsideration request really covers all of the possible bases and is as successful as possible. Is it OK to block all images in the robots.txt file? Does it have any bad effect on search performance, or is it considered cloaking? It's definitely not considered cloaking. What will happen is we won't be able to index these images for image search. So that's usually the primary thing that happens. If you don't care about image search, if your website is not image-focused, then that might be perfectly fine for you. Another thing that might fall in here is if you host videos on your site. If you host them yourself, together with thumbnail images for those videos, and we can't get the thumbnail image, then we probably won't be able to index the video itself either. So if you're hosting your videos somewhere else and they're crawlable and the thumbnail images are crawlable, that's perfectly fine. If you're hosting the video files yourself and the thumbnail images are not crawlable, then we wouldn't be able to pick those up for video search either. Best practices for images on a website: what has high impact versus low impact? We're actually looking at revising some of our documentation on image search, so hopefully we'll have more for you there soon. So I'd stay tuned. The documentation as we have it now, I think, is pretty good as well. So I would definitely check that out. It's in the Webmaster Help Center. It gives you a lot of tips on things like file names, providing captions, alt text, all of that. And all of those things are really useful for image search.
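A robots.txt along these lines would produce the effect John describes, keeping all images on the site out of Google Images:

    # Keep all images out of Google Images:
    User-agent: Googlebot-Image
    Disallow: /

    # Caveat from above: if you self-host videos, blocking their thumbnail
    # images this way can also keep the videos out of video search.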
I think we looked at the Swiss website as well. I submitted a page a month ago; how come Search Console did not index my site even after a month? That sounds kind of weird. So if Search Console is not indexing the whole website after a month, that sounds like something is technically not quite OK. That could be twofold. If you're looking specifically at Search Console, maybe you're looking at the wrong version of your site in Search Console. So HTTP versus HTTPS, www versus non-www in the URL, those all matter. If you're looking at the right version of the site and it's not indexed at all, so if you can tell in Google Search, by searching for the URL, that it's not indexed at all, then probably there's a technical issue there that is blocking us from actually being able to crawl these pages somehow. So in Search Console, what I would do is look into the crawl errors section and see if there's something holding the website back. Another thing that could be happening is that maybe you've accidentally removed your whole site from Search using the site removal tool in Search Console. So I would double check there that you don't have any pending removals. One thing to keep in mind is that these removal tools remove all of the different potential canonical versions as well. So HTTP, HTTPS, www, non-www, all of those are removed too. So if you have a site removal request for the www version of your website, that will also remove the non-www version of your website. So I'd double check there to see if you have anything pending. If you have something in there, you can just click Cancel, and within probably less than a day, things should be visible again. So that might actually be the easiest option to try first. I was wondering what the best solution is regarding serving internal search results. My feeling is that these should always be noindexed, since they're often low quality. So what could be happening here? For internal search pages, there are two aspects that play a role for us. One is that it's very easy for us to get lost in the weeds by trying to crawl all of your internal search pages. If basically any word on your site could lead to an internal search page, and we find all of these potential internal search pages, and they all serve content, then our systems might go off and say, oh, we'll try to crawl all of your internal search pages, because maybe there's something really useful there. So from a crawling point of view, that's probably not optimal. That's something where you'd want to look into things like using noindex, the guidelines that we have for paginated content and faceted navigation, maybe using robots.txt to block crawling of some of those pages completely, anything to help rein in Googlebot and prevent it from going off and crawling too many crazy things. The other aspect is the quality aspect that you touched upon here as well, where sometimes these internal search results pages are really low quality, in the sense that they're random words, and they get indexed, and there are just random products on the page, but it's not really that useful. So on the one hand, the crawling can be a problem for us. On the other hand, the quality could be a problem for us as well. The quality, I think, is something that's not always black and white, in the sense that there are ways you can shape your internal search pages so that they focus on important keywords, so that the content you show on them actually makes a lot of sense, so that the internal search pages could be seen more like category pages rather than just search pages.
So that's something where I wouldn't say it's completely black and white, in that you should always block these pages, but it's definitely worth reviewing the quality of the pages that are being indexed there and double checking to see if there's something you could be doing to make them significantly better. All right. Wow. Let's see. I think we just have two questions left, so let me just run through those. And I have a bit more time, so if there's anything else on your end afterwards, we can get through that as well. In a scenario where two sites have the same technical profile and content, what metrics does Google use to determine which site has better user engagement and user experience? I think this is kind of a tricky question, in the sense that if the websites are the same, then they will have similar user engagement anyway. So that's something where I would personally focus on this more from the website point of view rather than from an SEO point of view. And if your website works really well, then that's probably a good sign. If you have exactly the same content as a competitor and you're just saying, my user experience is better, then you probably need to make sure that it's really significantly better, so that users actually understand that difference and actually go to your website. But from a search point of view, that's generally less of an issue, because we would see more of the indirect effects rather than the direct effects. A few years ago, we ran a PPC campaign on a single-page website with almost no content. We did not attract any links or anything else. Yet after getting a lot of visits to the site via ads, the site surprisingly started to rank organically for competitive terms. Can you explain what happened? Does this effect still exist today? I have no idea what happened in this case. I don't know which website that was or what was actually happening there. From my point of view, any traffic that you get from ads is completely independent of traffic that you get from search. The placement you get in ads is something that relies entirely on the ad systems. The placement in search relies on the search systems. We don't have any kind of interaction there where we would say that if you buy ads, then suddenly you rank better. Or, as some people say, if you buy ads, then you rank worse, because we want you to buy more ads. We definitely don't have any kind of crosstalk there between the search and the ads side. So my suspicion is maybe you're just seeing some indirect effects there, in that people are going to your pages, thinking that they're good, and recommending them to other people. And that's, of course, something that we could pick up on, that recommendation aspect: people linking to your pages, saying, well, actually, this was a good page; I want to recommend it to my friends. So probably what you'd see there is more of an indirect effect. But I'm not aware of lots of other people saying that I ran an ad campaign and suddenly my search rankings were a lot better. I think that would be kind of problematic. All right. What else is on your mind? What can I help with? We answered all the questions. I don't believe it. How is that possible? I'll ask a question, if you don't mind specific questions on our site. Go for it. On the American site, John, we had the same issue again in regards to, or we think it was the same issue in regards to, location and IP delivery.
So everything was being, or a lot of things were being, indexed as if they were in California, because of spidering from California. And I think it was that we added an option where you could tell us where you were, with the search and the sort, but it seemed like Google was then choosing to index that, and every subsequent page seemed to then be treated as California. But I wondered if you would mind looking and telling me if that's true or not, or whether we've now fixed that issue. Sure. Maybe you can drop the URL in the chat, and I can take a look. It's been that long since? Oh, man. I think it's always tricky if you have a lot of localization on the pages, because that's something that we pick up on as well. And we do crawl primarily from the US. So if there's something unique that you're doing for users in some US locations, then we would pick that up. Well, we just give them the option. It's more that, you know, when you choose products in the flying category, for example, you can choose nearest to me rather than price, et cetera. But am I also right in thinking that if you index one page and then you start crawling the rest, you're crawling the remaining pages sort of incognito? You don't keep cookies or learn from the previous page. Every new page you visit is essentially a fresh decision. So I thought it was a bit weird, because if you get to one page and index it, say by going via one of the search options because you indexed the dropdown, then you shouldn't just carry on through the rest of the site using what you've learned on page one. It should be starting from scratch. So that's why I thought it was a bit weird. But it has happened before; then again, so have a lot of things on our site. So the only thing that I could see is if you keep state in the URL or something like that. No, we don't. It really annoys us when other people do that, where you get stuck in a loop, so we just don't do that. OK, if you wouldn't mind taking a look, that'd be great. Sure, sure. What kind of pages would that be? A category page, so flying, food, water, any of the main things on the nav bar, I guess. OK, cool. I dropped the URL in the thing. Yep, got it. Cool. OK, let me just send myself a note to take a look at that. OK, cool. Thank you. That probably wasn't very interesting for anyone else. Well, I think these strongly personalized sites are always a challenge. Yeah, it's something we do for users because of the geographical nature of our products. If you're searching from California and you want to look at balloon rides, you don't want to see something in New York, necessarily, although we are a gift company, so you can buy them from anywhere. But the chances are you want to see things that are closest to you. So you want to give users that option, but you don't want to ruin it for everyone else, or for Google. Yeah. So what I have seen some sites do is handle some of this personalization with JavaScript and just block that JavaScript from being crawled. That might be an option. I don't know how that would work on your website in particular, so I'd have to take a look at that again.
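The approach John describes, deliberately keeping the personalization script away from crawlers so that crawling sees the unpersonalized pages, could look something like this in robots.txt; the script path is hypothetical:

    User-agent: *
    # Everything else stays crawlable; only the geo-personalization script
    # is kept from being fetched and executed during rendering:
    Disallow: /assets/geo-personalize.js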
OK, thanks. I'll catch up with you next time. Sure. All right. I won't let you forget. Hi, John. David here. Hi. Hi, John. Yeah, just a quick follow-up on that question around metrics. In the same scenario, if everything else is the same between the two websites, would something like bounce rate or page view time be a factor at all in any ranking indicators? Not on a page level. So we use that more to analyze which algorithms are working well and which ones aren't, and we do that across the billions of searches that we get. But not on a page level, where we'd say this one has a higher bounce rate than the other one. That's not something that we would do. Right, OK. All right. One last thing. Can I just jump in there? Go for it. Especially when it comes to bounce rate, now with my AMP pages, I'm seeing an increased bounce rate as a result of AMP, because each AMP session, as soon as the user clicks on something, shows up as a new session. And so that increases the bounce rate, and it kind of messes up all the stats. So how is that handled in the whole scheme of things? Like I said, from a search point of view, that's less of a problem. That's more something that you'd want to look at from your side. I've heard the bounce rate thing from the AMP side as well. I believe there are ways that you can implement analytics tracking on AMP that matches those sessions together, so that you don't have the bounce in the statistics, because it's not really a bounce. Someone going from the AMP version on the AMP cache to whatever you have on your normal site isn't a new visitor to your normal site, and it's not a bounce from the AMP page. It's essentially just the analytics side not connecting the two. All right, so let's take a break here. It's been great having you all here. Thanks for all of the good questions. I have the next Hangout set up for, I think, two weeks from now, or maybe just a bit over two weeks. Right after that, there's Google I/O as well. I'll be at Google I/O, and we have a bunch of other people at Google I/O. I'm thinking about setting up maybe a live Hangout session at Google I/O again. I thought it was always neat to have people in the room to discuss these things as well, rather than just everyone virtually there. I mean, not that you all being here virtually is bad. It's just a different feeling as well. So if you're at Google I/O, maybe we'll see you there. How do you get to go to Google I/O? I believe you have to apply for a ticket. I don't know what the process is there this year. I suspect getting a ticket now will be hard. But if you already have a ticket, then you're kind of set. All right. OK. So thanks again, and I wish you all a great weekend, and hope to see some of you again next time. Thanks, John. Thanks, John. All right. Thanks. Thanks, John.