All right. Welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a webmaster trends analyst here at Google in Switzerland. And part of what we do is talk with webmasters and publishers like the ones here in the Hangout now. As always, there's time for you all to ask questions live. There are also a bunch of questions that were submitted already. If any of you are kind of new to these Hangouts and have some questions to get started, feel free to jump on in. Do we have one? I see no one else. All right. Go for it.

OK, it's a bit of a strange question. Probably it was answered somewhere else, but I couldn't find an answer. A friend of mine has a site where he is basically selling access to his content. He has a number of articles that everyone can read for free. It is cookie-based, IP-based. Everyone can read 10 articles for free and would have to subscribe and pay to continue reading after that. The problem is we don't know how to handle this with Google. Of course, we could say that if it's Googlebot's user agent, then we simply show all articles. But in this case, it would be some kind of gray area, because we would be providing different content to Google than what we provide to a returning visitor. So what would you recommend in such a situation? How should we handle this?

Yeah, there are essentially three things you could do there. The most obvious one is to just say, well, if this content isn't available to everyone, then I block Googlebot as well.

The problem is that any article is available to anyone, but not all articles are available to everyone. So anyone can read 10 articles, the first 10 articles that they open, but those could be any articles. So it's not that there's a fixed set of 10 articles that are available. There is not.

OK, then that matches the First Click Free model pretty much. So with First Click Free, what you can do is let users read the first, I think, three articles for free when they visit your site from Search. And afterwards, you can show a paywall or whatever you want. You can show more than three if you want. If you want to say 10, then that's fine too. But essentially, the first three that they click on from Google, those are the ones that they should be able to have access to per day.

Yes, but our problem is a bit different, because Googlebot comes, and after Googlebot visits 10 pages, crawls 10 pages, on the 11th page it will get the paywall. Is it OK if we look and see whether this is Googlebot's user agent, and show it something different?

Yes, with First Click Free, you can do that. So you can show all of your content to Google, provided this is within the First Click Free area of your website, so things that people can look at when they come to your website for the first time. And you can show all of this content to Googlebot normally. So you can whitelist Googlebot for that situation so that it doesn't get limited to 10.

OK, so it wouldn't be interpreted as cloaking or something like that.

That's correct, it wouldn't be.

OK, thank you.
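As a rough illustration of that whitelisting, here is a minimal sketch of how a site might meter free articles while letting a verified Googlebot through. The metering function and its names are hypothetical; the reverse-DNS plus forward-confirmation check is Google's documented way to verify Googlebot, since anyone can fake the user-agent header alone.

```python
import socket

def is_verified_googlebot(ip: str, user_agent: str) -> bool:
    """Verify a claimed Googlebot visit with the documented
    reverse-DNS / forward-confirm check, not just the user-agent."""
    if "Googlebot" not in user_agent:
        return False
    try:
        host = socket.gethostbyaddr(ip)[0]           # reverse DNS lookup
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        return socket.gethostbyname(host) == ip      # forward-confirm the name
    except OSError:                                  # no PTR record or lookup failure
        return False

def may_read_full_article(ip: str, user_agent: str,
                          articles_read_today: int, free_limit: int = 10) -> bool:
    # A verified crawler bypasses the meter; normal visitors are counted.
    if is_verified_googlebot(ip, user_agent):
        return True
    return articles_read_today < free_limit
```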
All right, let me run through some of the questions that were submitted. And it looks like we'll probably have more time for your questions as well as we get closer to the end.

When Google tests a new algorithm or ranking factor, how does Google evaluate whether the result of the changes is improving the search experience? That's always really tricky. So one thing I'd recommend there is, I think, like last year or the year before, Paul Haahr did a presentation at SMX in San Jose, I believe, where he talked a bit about what we do when we roll out new ranking factors, how we evaluate them. So I'd try to dig up that video and watch that. That's probably the closest insight you can get from a Google engineer who's working on the quality side of things. So Paul works directly on the ranking side, on the quality adjustment side of Google. And he makes a lot of these decisions as well.

So we look at a number of factors in practice. Let me see. So first off, what we do is we try to evaluate how much this algorithm affects the search results. Then we work together with human raters to see: does this actually provide better search results or not? We also do kind of A/B testing, where maybe 0.1% of users actually see this new algorithm, and we try to see how they respond to that.

The tricky part is, of course, that there's no single factor that you can look at and say, if this number is OK, then everything is good. Because sometimes people take time to get used to something new. Sometimes people don't realize offhand what the difference actually is. So for example, back when, I don't know, maybe four or five years ago, when you would do a query for a question, you would find all of these articles that were essentially written specifically for that query. And the first time you see something like that, you think, oh, well, Google is providing relevant search results. But then you click on the article and you notice this is something that's totally artificial, has no depth at all, and no information at all. So just looking at the title and the snippet might make it look like it's good. But actually, the results were terrible. So these are things where you can't just look at one number and say, well, this number goes to seven, and then we launch everything. We really have to look at it over time, see how users get used to the search results, what the real difference is. So that's something where it's definitely not easy to roll out a new algorithm or a new ranking factor when it comes to Google search.
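To make the "0.1% of users" idea concrete, here is a generic, hypothetical sketch of deterministic experiment bucketing. This is a textbook pattern for exposing a small, stable slice of users to a change, not a description of Google's actual infrastructure.

```python
import hashlib

def in_experiment(user_id: str, experiment: str, fraction: float = 0.001) -> bool:
    """Deterministically place roughly `fraction` of users in an experiment arm.
    Hashing (experiment, user_id) keeps a user's assignment stable over time
    and makes assignments independent across experiments."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    value = int.from_bytes(digest[:8], "big") / 2**64   # roughly uniform in [0, 1)
    return value < fraction

# One searcher is usually in several experiments at once:
for exp in ["new-ranking-signal", "ui-pixel-tweak", "snippet-length"]:
    print(exp, in_experiment("user-12345", exp))
```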
John, can I just ask a question on that? That's all right, just to show my face very early here. So you get a lot of people that talk about click-through rate and bounce being factors in search results. I'm not saying that that's the case, but I'm just saying that in the SEO community, we all hear people saying click-through and bounce, et cetera. So maybe when you're testing things, those factors will come into consideration then. I know that bounce is like a dirty metric as such from your side, in terms of that the data is on somebody's site for a start, isn't it? So a click-through from search is easier to measure than behavior on a website. But is that something that maybe, when it comes to evaluation, would factor in? So if somebody's getting a really good click-through rate, and they've obviously got all the meta descriptions reflective of what's actually on the page, and it's a good match, yeah? That might be seen as a good result when it's being evaluated. True.

And you know this concept of pogo-sticking, where somebody goes on the site and they get straight back, pogo-sticking back to search, and they're looking for the same query. Would that be an indication that, well, actually, that must've been a bad result, because they didn't find it? Do you know what I mean? So if somebody's still looking around, then they haven't found what they need.

We try to look at that more on an algorithm level, so on a broader level, to take out all of those edge cases where people look at a site, and the site has really easy-to-understand content, and you get the information that you need and can leave immediately. So these are things where we're looking at it more in aggregate. Yeah, I mean, these are things where we try to look at that on a very broad level when we're evaluating algorithms, because we need to understand what the general behavior is there. And these are things that can change over time.

So for example, I remember when we initially launched the label for mobile-friendly sites, we tried a variety of different things, from text to symbols, to positive labels or negative labels. And pretty much what we noticed is that as soon as we made a change, people were confused and they weren't clicking on the mobile search results, even though they were on a mobile device. And it seemed like whatever we were doing was just confusing them more. So what we noticed is that, depending on what we were doing, it just took a bit of time for people to understand what that actually means. So if we show a green smartphone, it doesn't mean it will call the owner of the website, but rather that this is probably a search result that is relevant to you. Or if you have a crossed-out smartphone, then you can understand, well, this is something that's not mobile-friendly, so maybe it's not that useful on your smartphone. But these are things where, if you look at the metrics immediately after the launch, you will see all kinds of crazy things. So with regards to bounce rate or click-through rate, you'll see all of this confusion happening. And this is something that just takes a while to settle down. And even monitoring how long it takes to settle down, that can be useful as well.

So when we have all these things like the ranking tools that monitor the temperature of the SERPs, or rank changes across a broad number of keywords, things like that could be massively impacted, in terms of things bouncing around, that's what I'm saying, when you're making changes.

Well, I imagine these SEO weather tools recognize these kinds of changes from time to time as well. The tricky part is that pretty much everyone, when they're searching, is in a number of experiments. So there are always changes happening in the search results. There are always things that we're testing to see: are these good changes or bad changes? If we change the UI slightly and add a couple of pixels here, how do people respond to that? And that's pretty much always happening. So a lot of times, when the SEO weather report sites say, oh, it's record high weather today, and we ask around, it's like, well, we're not doing anything crazy. It's just a bunch of tests that are running at the same time maybe. And maybe this testing tool is running into queries where they do see those changes. So it's not necessarily that something has actually changed, but we're always making changes.
We're always kind of tweaking and testing to see what we need to improve. And that's something that I think all websites should be doing, always questioning what you're doing now and thinking about what you should be doing differently. And don't just blindly charge down new directions and say, oh, I saw that other websites changed all of their fonts to, I don't know, Comic Sans, therefore I will change my website too, and it will drive more traffic to my website. These are all things that you need to test. And maybe it works for some. Maybe it works for others. You need to constantly rethink what you're providing to make sure you're on top of things, that you're not just reacting to what the rest of the market is doing. Thanks, John. Cheers. Thank you.

All right. Let's see. Big, long internationalization question. I manage two sites, same application, almost identical sites, different country code top-level domains. As of October, these sites were redesigned, same ccTLDs, but the information architecture changed. Let's see. We submitted sitemaps with hreflang tags. And on December 3, the home page for the Australian site disappeared from the index. When you search for it with a site: query, it's not there.

I think this is probably too complicated a question to cover in a live Hangout like this. So what I would recommend doing there is posting in the Webmaster Help Forum with the details, so with the sites that you're actually looking at, with the titles and the queries where you're seeing these problems. And we can take a look there. Oftentimes, the top contributors in the forums know the common problems that some of these sites run into, and they'll be able to help figure that out. And they can also escalate that to us if there's something really weird happening that doesn't match the common pattern. So that's what I'd do there.

Let's say I have a website with a homepage, example.com, with my standard content. Besides that, I also have example.com/xyzabc. This URL redirects to the home page, but coming from that page, the homepage has slightly different content, perhaps personalized through cookies. Since with such an implementation we're serving Google two different versions of content for the same URL, how can I be sure to force Google to index just the standard homepage and not the special version? So not the one that is shown with the other referrer.

So for Google, we index pages based on the URL itself. We don't crawl with a referrer. We don't crawl with cookies, for the most part. So what would happen here is we would index that page the way that any new user would see it when they visit it for the first time. So in a case like this, we would pretty much always see the normal homepage content and not the personalized version. So if you're doing any personalization through cookies, through checking the referrer, through the location of the user, then we wouldn't see that. If that's OK for you, if you're showing the normal content that you want to have indexed, then that's perfectly fine. If the personalized content is actually the content that you need to have indexed and to rank for, then that's something where maybe you need to make sure that at least some of it is available on the standard homepage by default as well, so that we can actually pick it up.
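As a minimal sketch of that pattern, assuming a hypothetical Flask app and a hypothetical "segment" personalization cookie: because Googlebot crawls without cookies or a referrer, it always lands in the default branch, so whatever that branch returns is what can be indexed and ranked.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def home():
    # Googlebot crawls with no cookies and (mostly) no referrer, so it
    # always falls through to the default content below. Keep anything
    # you want indexed and ranked in that default response.
    if request.cookies.get("segment"):  # hypothetical personalization cookie
        return "<h1>Welcome back</h1><p>Personalized offers and prices.</p>"
    return "<h1>Example Shop</h1><p>Standard content every first-time visitor sees.</p>"

if __name__ == "__main__":
    app.run()
```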
So this is particularly common when you have websites with highly personalized content based on the user's location, where if you go to the home page, people from Switzerland see the content for Switzerland, and people from the US see the content for the US. In those cases, since Googlebot is primarily crawling from the US, Googlebot will only see the US-based content. And if the Swiss content is something that you want to rank for, then you need to make sure that it's available somewhere separately that can be crawled normally as well. So a common way to do that is to say: the home page has, by default, at least the main primary content that you need to rank for. The more personalized content is available under separate URLs that can be crawled separately. And any personalization that you do on the home page is additional content that you don't really need to worry about for crawling, indexing, and ranking. So that's what I would do here. So in your specific case, with a referrer within your website showing slightly different prices, we wouldn't know about those different prices, because we wouldn't crawl with that referrer.

A hypothetical site sells square apples on squareapples.com. Their competitor, or some bad actor, creates a website, squareapplesucks.com, where in 10 sentences he says that square apples just suck. The site with the "sucks" ending gets an even higher position in some rankings, and we lose a lot of users. So I guess the question comes down to: what can we do here, and what can Google do, to get rid of this other site that's so critical of our business?

So from our point of view, we wouldn't take action on a site just because it's negative about someone else. So that's not something where we would say, this isn't the official site, therefore we won't rank it in the search results, or therefore we'll remove it from the search results. If there's a legal reason for the site not to be available publicly, then you would need to get that legal decision, from a court order, for example, and bring that to our lawyers. And that's something that we would be able to take action on. But if it's legal content, if it's normal content that doesn't break the Webmaster Guidelines, then we would keep it in the search results. Whether it ranks higher or lower than the original is essentially a matter of normal organic rankings. So that's not something where we would take manual action on a site just because it's critical of another site.

I have a website, greatscores.com, with over 500,000 pages. It's in 10 languages. Where there's real content, 95% is displayed via HTML5 canvas. I assume this is not crawlable. Some competitor sites have little text content but rank well. We've had years of being affected by Panda and lost over 90% of our Google organic search traffic. Search Console doesn't report manual actions. We've done exhaustive work on it. How can I work out how this apparent imbalance can be fixed?

So I'd probably need to take a look at the details of a site like that to see what might be happening there. For the most part, the thing to keep in mind is that we primarily focus on the indexing side of things. So we try to help sites get their content indexed, and how it ultimately ends up being ranked once it is indexed is more a question around the quality side of things from the other parts of Google.
So one thing I might look at here is: how are you actually displaying this content, and is this content something that you want to actually rank for? So in particular, in this case, if you have online sheet music and you're just displaying the notes, the musical scores, with an HTML5 canvas, then that's probably not a situation where we'd be losing any content. On the other hand, if you have the lyrics on there as well, and those are also displayed with an HTML5 canvas and can't be indexed, then that might result in your site not being able to rank for that lyrics text, just because we can't actually crawl it. So that's perhaps one thing to look into. You can probably see this fairly quickly based on the queries that you see in Search Analytics for your content. Do they include the content that you're trying to rank for? Or is it just the general content on the page, or just the titles of the pieces, rather than the actual lyrics that you might have there? You can double-check how we render those pages with the Fetch and Render tool in Search Console, where you can see what these pages look like to Google. Does it look like a big block in the middle that's empty? Or can we actually see the lyrics, for example? Whether or not we can see the musical score itself is probably less of an issue, because the musical score isn't something that we'd be able to take into account for ranking purposes.
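The distinction John is drawing shows up directly in the markup: text painted onto a canvas exists only as pixels, while ordinary HTML text is crawlable. A contrived illustration, with placeholder text:

```html
<!-- Text drawn on a canvas is just pixels: there is nothing here
     for a crawler to extract as text. -->
<canvas id="score" width="400" height="120"></canvas>
<script>
  const ctx = document.getElementById("score").getContext("2d");
  ctx.font = "16px serif";
  ctx.fillText("Lyrics rendered this way are invisible to indexing", 10, 60);
</script>

<!-- The same lyrics as ordinary HTML text can be crawled and ranked. -->
<p class="lyrics">Lyrics kept in the HTML itself remain indexable.</p>
```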
Another thing I might do there is go to the Webmaster Help Forum and double-check with other peers whether there's something that you're missing, whether there's something maybe from a marketing point of view that you could be doing differently to attract more users, to keep users better, which in turn indirectly helps with your SEO as well. Because if people really love your website, they search for your website explicitly. Then on the one hand, that's traffic from people searching for your website explicitly. On the other hand, that's probably also a lot of people who are recommending your site to other people. So these are all things that you might want to look into to indirectly help improve things as well.

My website is suddenly getting tons of traffic from a single source. Will it affect my search ranking?

Getting tons of traffic doesn't sound like a bad problem. So at least from my point of view, I don't see how that would be problematic from a search point of view. For example, if you run ads somewhere, if you do TV ads, then it's normal that you would suddenly get a lot of traffic when people recognize your site. So that's not something that would be affecting search ranking.

John, can I just step in there? Actually, I'm the only one with great scores. I won't talk about that here; that's a longer question. But maybe the person getting this extra traffic is getting ghost spam. I know my stats suffered through ghost spam. And there are ways of dealing with that, just to tidy up your stats. Perhaps you could, if that's reasonable, give an answer about that.

So I assume you mean within Analytics? Yeah, that's exactly what I mean. Yeah, so in a case like that, what usually happens is that someone is just manipulating the Analytics script and acting like they're sending you lots of traffic. So maybe it just looks like you're getting lots of traffic, but actually nobody is visiting your site. So I wouldn't really worry about that at all. That's something you can probably check by cross-checking the various analytics sources that you have. So if you have Google Analytics, for example, and maybe log files from your server, you can double-check to see: are these numbers roughly matching up? They'll never match up completely. But if they're roughly matching up, then those numbers are probably reasonable. If the numbers you're seeing in Analytics are just way off the charts and completely different from what you're seeing in the server logs, then probably there's just someone messing with your Analytics, and you can pretty much ignore that. Or there are a bunch of articles out there that help you adjust your Analytics settings so that this kind of spam is filtered out there.
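As a rough sketch of that cross-check, assuming a hypothetical CSV export of daily sessions from Analytics and a combined-format server access log; the ratio threshold is an arbitrary illustration, not an established rule:

```python
import csv
from collections import Counter

def daily_hits_from_access_log(path: str) -> Counter:
    """Count request lines per day from a combined-format access log,
    where the date sits between '[' and the first ':'."""
    days = Counter()
    with open(path) as f:
        for line in f:
            try:
                day = line.split("[", 1)[1].split(":", 1)[0]  # e.g. 10/Feb/2017
                days[day] += 1
            except IndexError:
                continue  # malformed line, skip it
    return days

def flag_ghost_spam(analytics_csv: str, access_log: str, ratio: float = 3.0):
    """Print days where reported sessions dwarf real server hits, a rough
    sign that something is injecting fake traffic into Analytics."""
    hits = daily_hits_from_access_log(access_log)
    with open(analytics_csv) as f:
        for row in csv.DictReader(f):  # assumed columns: day, sessions
            sessions = int(row["sessions"])
            served = hits.get(row["day"], 0)
            if served == 0 or sessions / served > ratio:
                print(f"{row['day']}: {sessions} sessions vs {served} log hits")
```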
I thought they'd sorted that out, John. You know, towards the end of last year, they were talking about having got rid of spam in Analytics. But I'm certainly seeing it again. There's one in particular called Share Button, and it's a pain in the backside, to be honest, because it just totally skews all the figures. So I don't know, are they looking at that, do you know?

Yeah, I know they're looking at it. I pass that on to them from time to time, and they're like, yes, we're working on this. And I assume they do some things, and then the spammers do some things differently again. It's kind of a back and forth. I thought there was quite a good blog post with tips on how to set up filters for Analytics to take care of that. I haven't looked into that myself.

Well, there's actually a button in the settings on a view that says filter out all traffic from known spam bots. It doesn't work at all. Sorry, but it doesn't work; the spam is still there.

That's because the ghost spammers are improving or changing their techniques all the time, and Analytics is trying to keep up with it. And there are ways of minimising this, because I know it's annoying, and I've dealt with it on my site. I know that you can set filters, et cetera, but it would be great if they could try and resolve it on their side, so you don't have to. Yeah.

Yeah, I totally agree. It would be great to have this resolved once and for all, but I don't know what the actual plans are there. I don't work directly on the Analytics team. There are some great people working there, so I trust they're trying to stay on top of this and trying to find solutions that work forward-looking as well, that don't just respond to what the spammers are doing now. Thanks, John.

All right. Does a domain extension matter for SEO for international targeting? For example, if I go with .xyz or .net instead of .com, can I rank the same as a .com?

Yes, you can. So if it's a generic top-level domain, then you can use it as a generic top-level domain. You can set geo-targeting in Search Console and use it the same as anything else. This includes all of the new top-level domains as well, including the ones that look like they might be geo- or region-specific. So for example, .berlin is one where it looks like it's a domain that's targeted at a specific city, but actually, from our point of view, we treat it as a generic top-level domain. So if you want to specifically target users in that region, then you need to set geo-targeting in Search Console for that specifically. It also gives you the option to use it as any other normal generic top-level domain if you wanted to do that.

If you're trying to target, sorry, if you're trying to target local listings, say you've got a project that is like hotels in Berlin, blah, blah, blah, et cetera, and you go with a .com to target those locations, you've got to have content, haven't you, that actually matches, in terms of locations, et cetera. So things like maps and local information in the content, yeah. So that makes a massive difference, doesn't it, if you go with a .com?

Well, you need to have that anyway. I mean, if you have a .de to target Germany by default, then that's essentially just the geo-targeting side of things. You still need to be relevant in those areas. And if you have a generic top-level domain, you can do geo-targeting through Search Console, and that works just the same as a country code top-level domain. Obviously, there are some non-search aspects that can come into play there as well, where if users see a .berlin website in the search results, they might assume that it's really for that city in Germany, rather than just a generic global website on that topic. So that's something that might be worth thinking about. But from an SEO point of view, there's really no difference.

So, you know, I've just migrated from a .co.uk to a .com on quite a big location directory-type site, yeah. That doesn't automatically mean that I'm gonna suddenly start to get a lot of traffic from the US, because I haven't said, oh, well, this is actually UK. It's still ranked in the UK, because obviously all of the content, the towns, the cities, and the small suburbs relate to the UK, and the maps all relate to the UK. That was my point, really.

Yeah, well, I mean, for geo-targeting we take into account a lot of things. So that's something where we'll probably just get it right by default. But if you want to make sure that we really understand that it's for the UK, then make sure to have the geo-targeting setting. And with the geo-targeting setting, what usually happens is that, just in that country, we show the site a little bit higher in the search results when we think someone is searching for something local. So if it's a global website and you have it on a country code top-level domain, that's perfectly fine. That doesn't really change anything. If someone is searching for something global, then we probably wouldn't take geo-targeting into account as much. But if someone is searching for something local, like, I don't know, just pizza or something like that, then we would try to take geo-targeting into account and use that information to bubble up the more local results as well.

But Australian electricians rank in Manchester sometimes. Well, actually, not so much Manchester, but anywhere that has the same name, because I worked this out, and I suppose it's not rocket science, really: it occurred to me that a lot of the smaller towns in Australia and the US have the same names, yeah, like there's a Manchester in different places. And that's why pages that are targeted to, say, Manchester at a very granular level show up; it's obviously irrelevant, because maybe they've just not said that it's an Australian website in their, you know, their geo-targeting.

Yeah, I've seen a few edge cases like that that are a bit weird, that we could definitely improve on. Mm-hmm, okay, all right, thanks.

All right. If I block Googlebot for URLs like /my-account or /wish-list, is that good or bad? That's perfectly fine. So if you're using the robots.txt file to block access to specific parts of your site that shouldn't be crawled or indexed, then that's perfectly fine. It can happen that we still index the URL itself, but we wouldn't have the content. So it's unlikely that it would rank for anything, and it wouldn't affect the rest of your website.
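For reference, the corresponding robots.txt rules might look like this, using the paths from the question; note John's caveat that a disallowed URL can still end up indexed, just without its content:

```
User-agent: *
# Blocked from crawling; the bare URLs may still be indexed without content.
Disallow: /my-account
Disallow: /wish-list
```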
Let's see. I have a problem with my news website. In the past, it was appearing in the "In the news" box or Top Stories on the homepage and in search, but at this time it's restricted. What can we do to fix that?

The "In the news" and Top Stories sections in the search results are completely algorithmic, and they're based partially on the content, partially on the queries, and partially on how we think it makes sense to show websites within that kind of Top Stories setup. Some of this, I believe, also depends on whether or not you're using AMP on your pages, and whether it's valid AMP or not. So all of these things come together, and essentially it's organic ranking in the end that determines which URLs we show in which part of the search results. So specifically with regards to these kinds of one-boxes and different parts of the search results, it really depends on a number of different factors. So there's no one single thing that you can do to always show up there or to never show up there. And again, some of these depend on specific markup, maybe, or, if you're using AMP, some of them only use AMP results. So that's something to double-check: if you're trying to show AMP and there's something technically wrong with the way that you're providing the AMP pages, then that's something you'd need to fix too.

I see a lot of international sites ranking in India. They're not relevant, but they still rank well. Is Google doing this intentionally, or is the algorithm showing these search results because it considers them relevant and ignores the regional criteria?

If these are bad search results, then we're probably not doing that intentionally. What you can do is maybe send me some screenshots, some queries where you're seeing this happening. Ideally these would be queries that are kind of generic. So not searching for the title of one specific website and then complaining that this specific website is shown in the search results, but rather something generic, maybe a type of business, something like that, and really showing exactly what you're seeing there. So ideally, in the first couple of search results, we're getting something completely wrong, and that's something that our engineers love to see, because it really gives them something to work on. So if you're seeing examples like this, feel free to send them my way.

Curious about how gender comes into play with the Google search results. A Google search for "swimsuit" lists sites pertaining to women's swimsuits. Would it be negative or duplicate content to have non-gendered swimsuit pages?

No, I don't think that would be negative at all. I don't think that would be duplicate content. That's something which you can provide however you want on your website. So that's really essentially left to you to decide. As far as I know, our algorithms don't try to skew things automatically. We try to use the same algorithms across the board. And sometimes that results in weird search results that are a bit skewed.
A lot of times the feedback about these weird search results does help us improve our algorithms to make sure that we're providing a comprehensive view of the web rather than something that's skewed in one way or the other. But making content on your website that matches how you want to provide it, that's perfectly fine. I think that's a great thing to do.

John, can I ask you a question on that? Sure. It's like a follow-up. I'm just gonna paste a link into the chat. It sounds like he's talking about a similar result to this, where you get a list of things when you Google something. What type of result is that? This is just looking for certain types of gifts.

Yeah. I believe we call those featured snippets. So that's something where we try to pull out a bigger snippet from the webpage. Sometimes, if we can recognize things like lists on a page, then we'll try to compile a subset of those list items and have a "more items" link at the bottom, so that people who are really interested in this content can go there. But the idea is that this is a somewhat bigger snippet that provides a little more information about what's available on those pages, and people who are interested can actually go there to get the full view.

All right, that makes sense. Yeah, it is a list. I mean, it's in a horizontal scroll on that site. Okay.

John, I'm really surprised at that result, actually. That result is pretty commercial in nature. Just commenting on that, really. It's 100% commercial. But do you know what I mean about the snippets, usually? I kind of get that, but usually with featured snippets, for me, it's more about providing knowledge and stuff, you know, when a featured snippet comes up. Here it's like, these are popular gifts, people buy these, with dates and prices. I'm not criticizing that. I'm just saying that that's quite a surprising featured snippet, for me.

It's, I don't know. I don't find it as problematic as other things that we've sent to the team, but I don't know what the query was there.

The query was "experience gifts", which is a section of, you know, what we do. So it's the same sort of thing that we do. So it's basically given half a page of organic results to one out of the 50 companies it could have. So it is commercial.

That probably would be something where the team would be interested in hearing feedback. So what you can do in a case like that is use the feedback link at the bottom to let us know about it. I might end up pinging the team directly about that as well. Usually the more problematic ones, well, this is probably kind of problematic too, but the more problematic ones are when someone is looking for something generic and they essentially get an advertisement. So in this case, it's still listing different types of things, so it's not completely an advertisement, but sometimes it kind of drifts in that direction as well. So that would be something that the team would take a look at, to make sure that we're not pulling out content from a page that essentially turns that search result into an advertisement, rather than providing a general overview of what's available for this specific topic, and more or less objective information based on the query.

I mean, it's one of our competitors, so I'm loath to say this, but it does kind of give people what they want. Yeah, but I don't know. So...
But it doesn't give them a choice. It gives them one website, which is all of their products, not a bunch of stuff. For me, that's almost like a shopping result, but without the pictures. It is, it's exactly what that is. It's not in the right place, if that makes sense. For me, if it's a knowledge panel, I want it to answer questions and provide in-depth knowledge around important things. But a shopping result is like, this is the price, this is the item, and that's what this is, but without the pictures and the photos and such. I get it, I get it, yeah, yeah.

So in any case, these are the kinds of things where, if you submit feedback, it's not that we're going to just remove that, or tag your name with it, but rather the team is going to take a look at it and see: what should we be doing differently here? Is this just confusing for this one person, because it's one person out of many who's wondering about this, or is this something that's actually confusing a lot of people? Should we refine the criteria in our algorithms to avoid picking out commercial sites for queries like this?

If it wasn't for people like us, though, where would you ever get that? Do you get normal people clicking on the feedback link about a result and telling you, oh, this isn't what I wanted? Because it is what they want. This is the annoying thing. I can't, hand on heart, say this is bad, because even though it's a competitor, 99% of people would still get the suggestions they want. They would just be from someone other than us. So it's hard for me to complain about it other than saying it's not fair.

I would still do that, though. It's feedback that's useful for the team. So that's something that's definitely useful. And it's not that there'll be a mark on your record, like, this guy's always complaining about search results, but rather this is something that the team looks at objectively. Well, I don't need to worry about marks on my record, I've got enough. Yeah, don't worry.

Well, will that change, though? You know, that result, will it change depending on previous browsing behavior? If, say, somebody's logged in and they have been looking commercially for this particular type of thing, and there's a detection of intent to actually buy, would that result be different?

Maybe, I don't know. I'd have to look into the query and what's actually happening there, but that can happen as well. So there's always a pretty strong aspect of personalization that comes into play. I don't know if that would be strong enough to change things for that specific query. So that's kind of hard to guess.

You know, the other week, I'm not gonna say which one, but I mentioned a result that I found while doing a bit of research for my dissertation. It looked like two sites, two pages, that were exactly the same, yeah, that came up in search.
In actual fact, there were differences, and I worked out that perhaps the reason why that may have happened was that a .com and a .co.uk came up together. And it occurred to me that it might be because, earlier in the day, I'd been looking in incognito, because I'm an SEO and I look at rankings in incognito from time to time, as we all do, and I'd been looking at the .com for something, yeah. Then later on, I'd gone back into my normal mode and I'd been looking at the .co.uk. And the .co.uk page I'd visited about five times previously. So it occurred to me that maybe, when I ran that search and it showed these two sites, it was working out: well, actually, does she want the .com? Does she want the .co.uk? Because earlier she was looking at the .com. Is she in a different country at the minute? I mean, these things might come into play, maybe, might they?

I think that's more of a coincidence, because we wouldn't take that into account if it's from an incognito session. So sometimes you run across these uncanny coincidences, but that's really not related.

But it wouldn't pick up on any sort of browsing behavior in incognito at all? No. If I was logged in... Within that session, within incognito, of course, but from an incognito session to another session in another browser or another profile, that wouldn't be connected at all.

All right, okay. I have one more question, John. All right. Okay, it's about a site which promotes a new game. And the thing is, it has a page, "how you can help us", and it lists a few things there. One is: go donate to our Kickstarter campaign, and other things like that. And one of the things the guy wants to add, but I don't know if it would be in violation of Google's guidelines or not, would be a section where he says: write about us, share things about us, share news about us, write about us on your blog, give us a link from your site. Would that be in violation of the guidelines or not? He's basically asking for backlinks, but it's not contacting someone and asking. It's that, if someone lands on your page, you're saying you would appreciate it if they would talk about you and link to you.

That's usually not a problem. That's something that's perfectly fine. Making it easy for people to link to your site can make a lot of sense. So there would be no risk of something like a manual penalty if someone gets to that page and sees that? That would be perfectly fine. Okay, thank you.

All right, is that... Yeah, go for it. Just one quick follow-up to that, which might just be speculation, but previously people have said that Google looks for particular phrases on sites in order to find spammers and link builders. So if you have the words, you know, "link to us", or specifically if buying links is mentioned, or something, then you can pick that out. But are you really good at determining the people that are buying and selling links versus the people that are just asking for a link? Because that's all that is, saying "we're great, link to us", which a lot of people do, versus "we buy and sell links", which is a subtle difference depending on the language used.

Yeah, so for the most part, we should be able to pick those apart. So the common situations, like "if you like what we're doing, link to us", or "recommend us", or "leave us a positive review", all of these things are essentially perfectly fine.
You're not forcing someone to do something. You're not limiting access based on what they're doing. You're not exchanging anything for those links. You're just saying, well, don't forget to let us know what you thought about our content, or let other people know about our website if you liked what you saw here. Okay.

All right, let me run through some of the questions that were submitted quickly.

We wonder about the priority of an H1 tag compared to other heading tags on our page. Is it like 50%, 100%? We don't really have numbers with regards to specific headings and how strongly they're valued. So I would recommend just using them how you would normally, semantically, on a page, and not overdoing it. And then things will work out fine. We try to use the heading to determine the context of the heading and the rest of the content on the page. So there's really no value in taking the whole page and wrapping it in an H1 tag. It's really more about understanding the content there better.

With the mobile-first policy, will it affect desktop rankings? No, this is specific... wait, wait, take a step back. Mobile-first indexing, will it affect desktop rankings? Yes, that will affect the desktop rankings as well, because we will index the mobile pages and use that for the desktop search results too.

John, can I just ask one question on that? Because I shared today Jennifer's piece about mobile-first indexing, the "what you need to know" type thing. And a few of my SEO peers have said it's not just around the corner. So how big is the corner? How soon do we think mobile-first indexing is likely to come around?

I don't have any dates, so... Is it months? Is it years? It's not gonna be years. So I would guess, and I'm just making a guess, later this year sometime. But we will inform people when we get closer, when we actually have a date, so that there's really sufficient time for people to resolve issues that they might need to resolve. And we'll try to also inform sites where we recognize issues. So if we recognize that maybe your mobile version doesn't have all of the same content or markup that your desktop version has, then we'll try to let you know about that through Search Console as well, so that you're aware of these issues and have a bit of time to actually resolve them.

So it's not gonna be like Q1 this year or anything like that? I don't know about Q1, but it's definitely not the case that it's like, next week we'll be switching it on, by the way. So we're really aiming to have sufficient time to let people know about this. We're still experimenting with where the actual problems might be. So that's something we wanted to announce early, so that people are aware of this direction and can already think about their sites in that frame of mind, but not in the sense that people need to urgently change things on all of their sites.

Will we get a date like we did with mobile-friendly? Probably, but not today. Okay, all right, so what I'm saying is, will we get a date near the time, like, remember with the mobile-friendly update, when you said April 21st, it's all gonna kick off? Will you have a date like that? I don't know. All right, okay, well, that's worth knowing. Thank you.

Does Google place more or less value on internal links that open in a new browser window? So with target="_blank" or target="_self"?
And no, we don't treat them differently at all.

I found this one interesting. I'm from Sri Lanka, and the dominant language used here is Sinhala. Although it's widely spoken, the actual Sinhala keyboard is not widely used, especially not in searches. So users are more likely to type phonetically using the QWERTY keyboard. So the question is: is it worth having a piece of content with the Sinhala content phonetically written out using the English script? Or would that just look like spam to Google? Would hreflang tags help or hurt? Technically the content is not in Sinhala, and as far as I know, there's no way to define this particular case.

So I asked around a bit about that specifically, and it is a tricky situation. It's similar to other cases that we've seen, for example, in Hindi, where people like to read content in Hindi in the native script, but they search with the English alphabet. Not in English, but with the English alphabet. And what we've seen is that some sites use the titles of the articles, for example, with the phonetically transliterated text, so that at least the titles of these articles are findable like that. We've also made significant changes in the way that we search and the way that we index the content, specifically for those users in those locations. So we'll probably try to catch up on that as well, so that when people search with transliterated queries, we'll probably still be able to bubble that content up. But I see that as something more long-term than something just around the corner. With regards to hreflang, one recommendation I heard is to use a BCP 47 tag, where you could do something like "si" for Sinhala and then "-Latn" to let us know which script variation of the language you're actually using on those pages. But that's something you might want to experiment with rather than blindly implement.
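For illustration, BCP 47 script subtags in hreflang annotations might look like the following. The URLs are hypothetical, and, as John says, whether a Latin-script Sinhala variant is fully supported is exactly the thing to test rather than assume:

```html
<link rel="alternate" hreflang="si"        href="https://example.lk/sinhala/" />
<link rel="alternate" hreflang="si-Latn"   href="https://example.lk/sinhala-romanized/" />
<link rel="alternate" hreflang="x-default" href="https://example.lk/" />
```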
Let's see. Yeah, the other questions I probably need a bit more time to dig into. What else is on your mind?

Oh, I got one. Okay. So this is actually site-specific. We're working with a university in the United States, and they recently launched a new program for their students, and they decided to create a separate WordPress installation for that program in a sub-folder on the main domain. So this is the actual sub-folder, and it's been launched for four months now. And one of the weird things is that, for most of the main keywords that they're targeting, they're not ranking at all. And I don't mean not in the top 10 or top 100. For something like this, which is one of their main keywords, they're not ranking at all, anywhere in the results. So I went to the last page, 700 results or something like that, and they're not there at all. It's been four months, so I expected that at least they would be somewhere in the top 100. So I'm not sure if there's something maybe wrong with the domain, because at least on that sub-folder level everything seems to be fine. I was thinking that, since the design is different, maybe Google thinks it might just be a student creating a website in a sub-folder, and maybe it should start from zero. But even that wouldn't explain why they're not showing up at all. So I'm not really sure what's going on there.

I don't know. I'd probably have to look into that more. So what I usually do there is run a site: query in addition to the generic query, to see: are we even indexing that content? Is it even available? Yeah, it looks like it is. So I don't know. I can take a look at that later on, to see if there's something you're missing or something that we're getting confused about. But it sounds like, I don't know, a weird situation. Yeah, yeah. I was expecting that, since it was indexed, it would show up somewhere, even at a very low position, but not showing up at all for one of the main keywords is kind of weird, especially since the content seems to be normal. Yeah. I don't know. I can take a look and see.

Are you maybe treating .edu domains differently in any way? No, no. Okay, I'll look forward to anything you might find. Sure.

Can I ask a really quick question, John? Sure. So I'm quite intrigued at the moment by the concept of near-duplicate content, because I think it's an area where people have quite a lot of confusion as to what near-duplicate content actually is. Obviously we know that duplicate is, like, exactly the same output, same checksum, et cetera, but on maybe HTTPS versus HTTP, a different domain, like mirroring, that kind of thing, yeah. So it's the same output. Whereas with this near-duplicate content thing, has your definition internally changed over the years, given that there are more and more sites now that do lots of dynamic stuff? So sites that are maybe pulling in bits of latest posts, and they do a lot of randomization, and they have almost little clusters of things, et cetera. Because I was looking at some of the Monika Henzinger stuff, my new area of interest now, you know, the detection of duplicate and near-duplicate content patents from over the years; there have been about seven iterations of that. And that looks at how, snippets is the wrong word, but clusters of content are marked out from a page and then compared with each other to see whether any bits are the same, taking away the template, the boilerplate areas. But I'm just wondering whether, over the years, because people's sites have changed and the nature of them has changed, with all these dynamic things now, would it be fair to say that near-duplicate now is probably not what it used to be?

I think you could probably say that for pretty much all parts of search, because we have to adapt to what's happening out there on the web. And that's a part of what I do, what my team does: we try to help optimize the search engine. So that's something where we'll get feedback from users, from site owners. We'll see that we're not picking up the content properly, that we're, I don't know, crawling too much or not crawling enough. And sometimes that comes back to issues like this, where we're not recognizing that these should be duplicates, or we're recognizing that these are duplicates, but actually they shouldn't be, because they have, I don't know, a different location or different prices on the pages, these kinds of things.

And you see a lot of clustering. You know, in search, you could look at the local listings. I'm not talking about local geolocation, but on shopping queries in the long tail, you see a huge amount of clusters, yeah? That's almost it, isn't it? It's like that query refinement is near-duplicate, but they're not duplicates as such.

Yeah, I mean, these things happen all the time. I think that's an interesting topic area, and that's something where the engineers, as long as I've been here, have always been reworking that. And it's like, oh, all of the duplicate detection, we're going to throw everything away and start over again with something better and something new, to try to make that work better. And that's something that happens all the time, that they're always working on refining it. Okay, all right, thank you for that, thank you.
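To make the exact-versus-near distinction concrete: one classic approach from the near-duplicate detection literature compares overlapping word "shingles" rather than whole-page checksums, so small edits only disturb a few shingles. This is a hypothetical textbook sketch, not Google's actual implementation:

```python
import hashlib

def shingles(text: str, k: int = 3) -> set:
    """Overlapping k-word windows; small edits only change a few of them."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Set overlap: 1.0 means identical shingle sets, 0.0 means disjoint."""
    return len(a & b) / len(a | b) if a | b else 0.0

def exact_duplicate(a: str, b: str) -> bool:
    # Exact duplicates: byte-identical content, identical checksum.
    return hashlib.md5(a.encode()).digest() == hashlib.md5(b.encode()).digest()

page1 = "the quick brown fox jumps over the lazy dog near the river"
page2 = "the quick brown fox leaps over the lazy dog near the river"

print(exact_duplicate(page1, page2))              # False: checksums differ
print(jaccard(shingles(page1), shingles(page2)))  # ~0.54: near-duplicate signal
```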
May I have a very quick one? Sure. It's about a one-page site. It has highly relevant content for its one keyword, and there is really not much sense in having more pages on the site, and it has a natural, good backlink profile and things like that. Would such a site suffer from the fact that it is just a one-page site and there are no more pages on it, or could it rank perfectly well?

It can rank just as well as anything else, yeah. I think the tricky part with one-page sites is when you have content that's not directly related on the same page; then it's harder for us to understand what the primary topic is.

No, it's something like selling a single service. Yeah. Or a product, or whatever. It's just one product, one service, so there's not much sense in having more pages. Okay, so there would be no penalty because it's a really small site?

No, no. As far as I know, we don't have algorithms that demote small sites. It's almost the contrary: we try to recognize websites individually and make sure that especially small sites are at least completely indexed. But there's definitely nothing that would demote a site just because it's small.

And not the opposite either? Promoting a site because it's big and has a lot of pages and backlinks to different inner pages? Because if you promote one, basically it's like you demote the other; it averages out.

I mean, some of that happens just naturally, in that a big site has collected a lot of signals over the years. But it's not the case that we would say: this is a big website, therefore it gets, I don't know, boosted twice as high as it usually would be. Okay, thanks.

All right. Thanks, everyone, for all of your questions. Or do you have a question, Kristin? I do. Go for it. Do you have a few minutes? I wanted to ask if you had an update on what Google's doing with its algorithm regarding fake news. I don't have anything new on that. So that's something that would probably need to come more from the search quality side of things. I'm usually less involved with that directly. Okay, cool. All right. Thanks.

All right, great. So let's take a break here. Thank you all for joining. Thanks for popping in, asking questions, and submitting lots of questions. I hope you all have a great weekend, and I'll set up the next Hangout so that we can see each other again in the future. Thanks, John. Bye, everyone. Thanks, John. Bye.