OK, welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a webmaster trends analyst here at Google in Switzerland, and part of what we do are these Webmaster Office Hours Hangouts, where webmasters, publishers, and SEOs can join in and ask us any kind of web search or website-related questions that we might be able to help with. Looks like we have a bunch of new faces as well. Do any of you who haven't regularly made these Hangouts want to get started with a question? If not, that's fine too. We will definitely have more time for questions towards the end. Or if you have anything to add to any of the other questions that were submitted, feel free to jump in then as well. I've got a question. OK. It's all right, you can go first. No, you can go first. All right, OK. OK, so the other week there was a little bit of an experiment done by somebody who turned their crawl rate setting up to high. Yeah, I don't know whether you saw it on Twitter. It was Oliver Nelson. And I'm intrigued. You know, the crawl rate settings have these defaults in the site settings that seem to be gathered all the time. Are there buckets that the settings kind of fall into? A lot of them seem to be set to 333.33 seconds between requests. There seem to be certain defaults that maybe fit in with different types of CMSs or different types of platforms. It's almost like, well, if it's this type of site, we'll do this; if it's on a CDN, we'll add this bit; and if it's got this much traffic, then we'll do this. There seem to be very clear patterns, like a combination of various factors that you could fairly easily ascertain: whether it's on a managed server or an independent server, whether it's on a CDN, whether it's on Shopify or different types of CMSs. There seem to be some patterns there. I don't know. I totally didn't see that, so it's hard to say. But we take a bunch of things into account when it comes to determining how fast we want to crawl. And part of that is also what we actually have to crawl, or what we want to have indexed. So it's not always just a technical limit. It's sometimes just what we think about the site, what we think makes sense in regards to crawling. So if it's a site, for instance, that has had issues with quality, or with lots and lots of low-importance pages, then if you switch it to high, you're not necessarily going to get a massive crawl there. If you set it to high, it's a request rather than an actual instruction, isn't it? It's not just to do with speed, it's to do with other things as well, isn't it? Yeah, yeah. Setting it to high essentially just says, well, if Google wants to crawl more, they're able to crawl more. It's not the case that we will crawl more. And I think it's also important to mention here that crawling doesn't mean ranking. It's not that you need to have lots of crawling in order to rank high. We can rank things high that we crawl very rarely. So you don't have to artificially increase the crawl rate in the hope that it improves your website's standing on Google.
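As a side note on checking this kind of pattern yourself: the request spacing being described can be measured from raw server logs. Here is a minimal sketch in Python, assuming a standard combined-format access log at a hypothetical path; note that matching on the user-agent string alone is naive, since real Googlebot verification is done with a reverse DNS lookup.

```python
import re
from datetime import datetime
from statistics import median

# Sketch: measure gaps between successive Googlebot requests in a
# combined-format access log. LOG_PATH is a hypothetical file name,
# and the user-agent check below is a naive filter.
LOG_PATH = "access.log"
TS_RE = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2})")

times = []
with open(LOG_PATH) as log:
    for line in log:
        if "Googlebot" not in line:
            continue
        match = TS_RE.search(line)
        if match:
            times.append(datetime.strptime(match.group(1), "%d/%b/%Y:%H:%M:%S"))

times.sort()
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
if gaps:
    print(f"{len(times)} requests, median gap: {median(gaps):.2f}s")
else:
    print("Not enough Googlebot requests found.")
```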
OK, all right. But if you've got a lot of stuff from an old website still in the index, and you're trying to switch everything over, then getting the new stuff crawled would presumably help to replace the signals from the old site that are potentially still in the index. But if you've got issues on that site historically, then just switching it to high isn't going to make Google come flying over and start tearing through your website, is it? Probably not. So I don't know, you're making me really curious. I'm tempted to go off on Twitter and try to find that. But for the most part, it's a technical thing. It's more a matter of how much we can crawl from your server, and setting it to high is a way of telling us, well, actually, Google could crawl a bit more if they really wanted to. But it doesn't mean that we will actually crawl more. OK. Right. OK. Thank you. Thanks. Hello, John. Hi. How are you? Go ahead. OK. Recently, we faced an issue with one of our clients' websites. I think they had some problem with their web developer, and the web developer had access to their Google Webmaster Tools account. What the web developer did was use the Webmaster Tools temporary hide option to remove the website from Google Search temporarily. And they did it. As a result, when we search on Google using their brand name, we do not find their website in Google Search. We also found that their URL was removed from their Google business listing. Now, we also found that in Webmaster Tools there is an option to re-include the website. But every time we click on the re-include button, Webmaster Tools shows a message that there is an error, please try again in a few minutes. We tried for two days; every time we click the button, we get this message. So is there any other way to fix the issue? The re-include button is really the right way to do that. I don't know. What you might be able to do is take a screenshot of the message that you're seeing, and either send that to me or post it in the help forum. And then maybe someone can take a look at it to see what specifically is happening there. I'm not aware of anything general that would say re-including removed URLs should not work. So if you have that button there, then it should actually work. What you might need to watch out for is that you have full access to the site, and also that you double-check the different versions of the site, so HTTP, HTTPS, www, and non-www, to make sure that all of those are also re-included. OK. And the next question: one of our clients' websites is designed in a little bit of a strange way. For example, they have an About Us page, and on it they have the content twice, once for the mobile version and once for the desktop version. So when I open the website on desktop, I can only see the desktop version; the other text is there, but it's hidden. So the question is, will this hidden text be a problem? That shouldn't be a problem. That's something we can still see if it's in the HTML of the page, even when it's hidden. It probably doesn't make sense to hide that kind of text, but at the same time, it's not going to negatively affect the rest of your website. The problem is not only on one page; the whole site has the same issue.
I don't know how they designed this website, but we found the same issue across the whole site. And they launched this website at the beginning of this year. Before launching, they had very good rankings across the whole website, but after launching, they lost rankings for a lot of keywords. So do you think this is the cause? I mean, this is something I would probably recommend fixing, but I doubt it would be a reason for us to rank the website lower. The thing to keep in mind is that whenever you do a complete relaunch of a website, even if you use the same URLs, and often you don't use the same URLs with a relaunch, we have to re-evaluate the website based on its current state. So changes in ranking are always possible when you do a relaunch, especially if there is less content on the page, if the important content is not available on the page anymore, or if some of the content has moved into images that we can't read the text out of. All of these things can play a role when you do a relaunch. Essentially, we're looking at the new website and trying to rank it based on what we see there. OK, John. Thank you, John. Sure. I mean, it's always tricky with these kinds of changes that happened quite a while ago. So I would clean this up, but I would also look at the old version of the website in archive.org, which is a way of looking at older versions of pages, and just double-check to make sure that you didn't lose any content with this redesign, that the internal linking is still pretty good, that we can crawl everything, and that we can pick up all of the content with that relaunch. Yeah, thank you. OK, let me run through some of the submitted questions, and then we'll have more time for comments from you in between, or questions afterwards as well. It seems Google is testing longer descriptions in search results. Should we prepare to include longer content in our meta description tags? In general, you can do that if you want to. This is not something where we've ever restricted you and said you shouldn't put this much content in your description tag. Sometimes we will pick up more content from a description meta tag, sometimes we'll take less, and sometimes we'll take some content from the body of the page as well. All of these things are essentially part of the normal organic search results, where we've always tried to figure out what makes sense to show to a user there. It's also important to keep in mind that we sometimes change the description and the title based on the query. So if you use a site: query to see how your pages are showing up in search, that's not necessarily what a normal user would see when they do a search. I would take the queries that you see in Search Console, search for those specifically, and then look at the way we show the search results there, to get a better understanding of how we're currently presenting your pages in the search results. We attended a Hangout a few weeks back, as we're having issues with our home page not ranking for a brand name. I think we looked at this in the help forum, so I don't know if you're here or not. But I think the tricky part here is that this is a website that is essentially offering adult content. So it's sometimes really hard for us to figure out how we should be ranking it appropriately with regards to different types of queries.
So that's something where sometimes you run into a tricky area, where you think maybe your home page is kind of SafeSearch-friendly in a rough sense of the phrase. But when our algorithms look at your website overall, they still see all of this adult content around there, I guess all of the sex toys that are being sold there. That's something which we also try to take into account. We try to figure out where it actually makes sense to show these web pages, and for which types of queries. And in talking with our engineers, they understand this is kind of a weird edge case situation, where maybe we could be doing this better, or maybe we could be doing that better. But it's also essentially an adult website, so that's what our algorithms are mostly treating it as. We've recently had an SEO consultant recommend that we make our 404 error page and onsite search no-results pages unique by adding some kind of dynamic information to the pages. Do 404 pages and no-results-found pages really count as duplicate content in the eyes of Google? So for us, if a page returns a 404 result code, we ignore all of the content on that page. So if this is a 404 error page that's correctly returning a 404, you can do whatever you want on that page to make it user-friendly and usable. Googlebot is not going to take that into account. You can put links on there to appropriate products, maybe related products, if you can figure out what the user was trying to look at. That makes it a lot easier for users to actually stay on your site and do something useful there. But Google is not going to follow those links or take that content into account. Roughly the same applies to no-results-found pages, which are essentially soft 404 pages in our mind, in that someone is searching for content and you're saying, well, this content isn't available on my website, and we treat that as a 404 page. So that's also not something where you need to artificially introduce content or links to those pages to make them appear unique. It's perfectly fine to have those be generic, and to have them share kind of the same content across the pages there.
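To make that distinction testable on your own site: here is a minimal sketch, using the Python requests library and a hypothetical URL, that checks whether a missing page returns a real 404 or 410, whose content is ignored as described above, rather than a 200 soft 404.

```python
import requests

# Sketch: verify that a known-missing URL returns a real 404/410 rather
# than a 200 "soft 404". The URL is hypothetical.
resp = requests.get(
    "https://example.com/this-page-should-not-exist",
    allow_redirects=False,
    timeout=10,
)
if resp.status_code in (404, 410):
    # Content on this page is ignored for indexing, so it can be
    # as user-friendly (and as generic) as you like.
    print(f"OK: returns {resp.status_code}")
elif resp.status_code == 200:
    print("Warning: missing page returns 200, likely a soft 404")
else:
    print(f"Returned {resp.status_code}")
```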
Is a disavow not required, since Google discounts the links, when Google decides to take manual action for partial unnatural links? So a partial unnatural links manual action is when the web spam team looks at a website and sees that there is a group of links pointing to the site which are unnatural, which are the kind of thing that we wouldn't want to take into account. And they essentially discount those on the web spam side, and they let you know about it. So if you want to clean that up, you can do that. But if you don't want to clean that up, that's generally fine as well. From that point of view, it's essentially doing the same thing as our algorithms when they try to take unnatural links into account. One common situation where this happens is when someone is trying to do reputation management and artificially push some article on your website that they like, to rank higher than the content that they're trying to have shown lower in the search results. And from our point of view, it's not the case that you're trying to spam Google by placing artificial links to these pages; it's someone else that's trying to do that. So the web spam team tries to clean that up and just say, well, these are the things that we're not going to take into account. And they let you know about that just as a matter of record. And if you're fine with that, it's perfectly fine to keep it like that. A couple of months ago, I asked whether you could provide... oh, go for it, Dawn. On the subject of disavows, I know it's been quite a bit of a hot potato of late, whether you should do it or whether we should just trust Google to understand when something is really a negative-SEO type thing. So if, for instance, you suddenly start... because one of my projects suddenly started getting some really, really nasty links, with pretty horrendous messages about me personally in the links. Should I disavow those? Or should I just presume that I can trust Google to realize that people are just a bit nasty sometimes? I don't know. Personally, I would just disavow those, to say, well, I'm sure Google's not going to take these into account if I disavow them. And then you don't have to worry about whether or not Google's algorithms will decide this is good or bad. You've essentially just taken care of it, and you don't have to worry about it anymore. I think for the most part, we probably take that into account automatically as well. But for peace of mind, and just for sleeping better at night in general, you just want to make sure that this is really taken out of our algorithms, and you can do that with the disavow. Yeah. So for instance, this one had location-based anchors in it as well, which makes me a bit suspicious that it might actually be some competitors in the same city as me, perhaps. So that could potentially have quite a negative impact on things around location, couldn't it, if it isn't picked up well by your algorithms? I mean, if it were me, I would just disavow them. Disavow. Yeah. I think the average website owner who doesn't dig into their links generally doesn't need to care about this. They're not going to know about these links or about the disavow tool, and that's perfectly fine. But if you're looking into your links and worried about these things, I would just disavow them and move on. And then you really don't need to worry about what happens with the rest of Google's algorithms. And if these are the ones that showed up in Google Search Console, rather than just the massive wealth of other links that are out there, that's probably a case where I would think, Google knows about this link, so I should probably disavow it, because it's going to be taken into consideration somehow, given that it's in Search Console. I'm not 100% convinced that it's going to be completely ignored; I've seen fluctuation around these things. Yeah. I mean, if you see them and you're worried about them, I would just disavow them. Just because they're in Search Console doesn't mean that we actively use them in our algorithms. It's just that we've seen them and we want to let you know about them. It's tricky to assume that something has an effect just because it's listed there. But I would just disavow them if you don't want to be associated with them, and then move on. Hi, John. OK. Thank you. I have a little question. We have noticed a deep decrease in visits coming from Google News and AMP pages since the beginning of October.
There have been many reports of similar situations on Google Webmaster Central from many websites, and this has been confirmed to me also by other online newspapers we're working with. Has there been a Google News algorithm update with regard to how articles are displayed in the carousel or elsewhere? So I guess this is with regards to the top stories carousel. Top stories, that's something that from our point of view is an organic search feature. And since it's fairly new, the engineers and the quality team are actively working on it, so I would expect changes to take place there over time. That's normal at the moment, in that we get a lot of feedback about the content that we show there, and we try to make sure it's taken into account and that we can show relevant results. Sometimes it makes sense for us to say, OK, we'll be a little bit more cautious and maybe show a little bit less here; sometimes we can be a bit more direct and show more content there. But it's really something where the engineers are actively working on those algorithms, trying to see what works best and which kinds of approaches or algorithms don't work so well. I have two questions about these two topics. Going back to the first topic, the disavow file: what is your personal thought about creating a master disavow file, John? For example, I've been working with a lot of websites, so over time I've collected thousands of sites that are spammy for sure. Do you think creating a master disavow file and uploading it to all the sites I manage is a good strategy or not? I don't know. It's something where, if we don't have links from those domains and you disavow them, that doesn't change anything. I don't know if that actually helps in the end, or if it's essentially kind of a placebo, in that you're uploading a disavow file, but it doesn't actually have any effect. So you're putting an extra maintenance burden on yourself to maintain these files that don't even play a role for those specific sites. That's where I'd say, well, maybe it helps to analyze the links that are going to a site and say, these are all known spammy ones anyway, so I can automatically put those in my disavow file when I see them coming in. But for the rest, I'd try to focus on what is actually useful for the site, rather than saying, well, this is my generic disavow file, I upload it to all my domains. I don't think that really provides a lot of value, in the sense of having a positive effect on the site. Yeah, I understand. But it doesn't necessarily mean there is a trust factor with disavow files? For example, even if my disavow file is a complete mess, you still treat it as a directive, yeah? Yeah. Yeah. OK. That's perfect. I mean, I see it more as maintenance overhead, in that you're uploading all of these files. And if the next webmaster doesn't really know what's happening, they look at this file and think, I have no idea what this is supposed to be or what I'm supposed to do with it. Should I be updating this monthly? And in the end, it turns out that none of these links ever played a role for this website. So it's kind of unnecessary work. Yeah.
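For reference on the mechanics being discussed: a disavow file is plain text with one full URL or one domain: entry per line, and lines starting with # as comments. Here is a minimal sketch, with hypothetical file names and domains, of folding newly spotted spammy domains into a per-site file, rather than maintaining one generic master file.

```python
# Sketch: merge newly spotted spammy domains into an existing per-site
# disavow file. One "domain:example.com" entry or full URL per line,
# with "#" comments, is the documented disavow-file format; the file
# name and the domains below are hypothetical.
new_domains = ["spammy-links.example", "bad-neighborhood.example"]

entries = set()
try:
    with open("disavow.txt") as f:
        entries = {line.strip() for line in f
                   if line.strip() and not line.startswith("#")}
except FileNotFoundError:
    pass  # no disavow file yet for this site

entries.update(f"domain:{d}" for d in new_domains)

with open("disavow.txt", "w") as f:
    f.write("# Disavow entries for this specific site\n")
    f.writelines(entry + "\n" for entry in sorted(entries))
```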
And the second question is about a recent topic, and it also relates to the news discussion. So, Black Friday: Black Friday is a sales event, and during that time it also becomes kind of a newsworthy search query. From what I've seen, there are many sites producing a lot of sales-event-related content, but newspapers also use their freshness-signal advantage in a way where, instead of newsworthy articles, they start creating commercial articles. And then they start ranking number one for that query during Black Friday, which has millions of searches. I believe this creates a bit of an interesting user experience, because as a user, if my query is informational, I would like to get information about Black Friday. But I think during that time it becomes commercial rather than informational, yet Google still shows news sites rather than sales websites. What do you think? I don't know whether this helps, but I was at an event not long ago, and actually, certain events have query intent shift. So Black Friday might mean something different today, as just two words, than it does next week or than it did last month. And if I'm not mistaken, what happens is search engines come to understand what people are specifically looking for with just those words, without much else, based on popularity. Query intent shift, I don't know. The example I had was Easter: Easter means different things just as that word. A month before Easter, people are looking for when it is. Two weeks before, people want to know where the Easter events are. Then at Easter, people want to know the meaning of it. So I don't know whether that's just it. I mean, it's definitely worth getting feedback on these types of things, but I don't think there is one absolute answer that is perfectly correct for situations like that. It's something that changes over time. And I think some amount of newsworthy content also belongs in there, because if people are searching for Black Friday, what commercial stuff would you put up? Everyone has a Black Friday sale; we might as well show just pure ads across the board. I don't think that would make sense either, because you're not really defining what you want to see for Black Friday. Do you want to see the total revenue that was turned over? Or do you want to see which stores have Black Friday sales? It's such a generic query that I don't think there is one thing you could show there that would actually be the correct answer. I agree, actually, to a big extent. But what I was trying to get at, as an ex-web-spam fighter of five years myself, is that I think what most of the big publishers are doing becomes a bit tricky. They first create the newsworthy assets on their websites to rank up there. And after that, most of the newspapers I've seen, because during that day I didn't even sleep, I was running a hundred sites about Black Friday, start changing their articles into collaborations with commercial things. And sometimes they even used non-nofollowed affiliate links and so on. Newspapers have such big power that even when they shift their content 180 degrees, they still rank for major generic queries. And I think this kind of becomes an abuse. I just wanted to get your opinion on this one. Yeah, I mean, that can definitely be the case.
But talking with newspapers, they're usually more of the opinion that their content isn't being taken into account that much on Google, and they'd like to see more of it taken into account. So different people have different opinions there. But it's definitely something where trying to find abuse is tricky, because these things move very quickly. And sometimes there are really, really big waves happening there, with lots of people searching and lots of people putting content out within a short period of time, where it's really hard to stay on top of it from an algorithmic point of view, but also from an abuse, web spam, and quality point of view. So that's certainly a tricky area. Hey, John. Hi, John. We are adding a new hamburger menu to our website, on both the mobile and desktop versions, which is going to contain all the main navigational elements of the page. This menu is hidden by default using JavaScript and shown only when the user clicks on it. My questions are: is Google's crawler able to read the navigational links with this implementation, or will they be considered hidden content and therefore not read? And does it make any difference to use CSS display:none instead of JavaScript for the hidden menu? As long as the content is in the HTML of the page, we should be able to take it into account. OK, even if it is hidden by JavaScript? Yeah. So if you have a link there to another page on your website, you can check whether Googlebot actually crawls that other page. One way to do this is to put a URL parameter at the end of the URL, just ?test=1, and see if we crawl that URL. And almost certainly we will crawl that URL, because we have a lot of experience with these kinds of menus that pop up, show content, and then disappear again after you're finished using them. OK, so that's also valid for the desktop version of the website.
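A quick way to sanity-check the condition described above, that the menu links are present in the HTML the server returns, is to fetch the raw page source and look for them. A minimal sketch with hypothetical URLs:

```python
import requests

# Sketch: confirm that hidden-menu links are present in the raw HTML
# the server returns, which is the condition discussed above. The page
# URL and link paths are hypothetical.
html = requests.get("https://example.com/", timeout=10).text

for href in ("/products", "/about-us", "/contact"):
    status = "present in served HTML" if href in html else "NOT in served HTML"
    print(f"{href}: {status}")
```

For the crawl side, the ?test=1 trick mentioned above can then be confirmed by watching the access log for Googlebot requests to the tagged URL.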
OK, and I have another question. A little while ago, I heard about something that has changed with AMP pages, and in particular the carousel snippet. So I would like to know if you have done some rollout that brings some penalty that could affect this kind of result. That's my question. So, whether there's some kind of penalty that would affect the carousel results on AMP pages. I don't think we have any kind of penalty on that. The only thing that we have with regards to manual actions on AMP is something that we're planning to roll out: when we can tell that the AMP page has significantly less content than the actual desktop or mobile page, then we won't treat it as an AMP page. That's something we did a blog post about, I think, last week. Usually this is something which is very obvious, where you have a full article when you read it normally on desktop or on mobile, and on AMP it's just the heading and then a "click here to read more", so it's basically just a simplified version on the AMP page. That's a really bad user experience, and that's something that we're taking action on. But apart from that, I'm not aware of any other type of manual action specifically for AMP pages. OK, but just to come back to the point: me and other publishers have seen a huge drop during the first days of October for all the results we had in the carousel, both on AMP and Google News. I've seen myself also a drop in the results for many queries where we always showed up in this particular snippet. So I would like to know if there has been a penalty, or whether it could be something else. We just noticed a drop for these results, and it's something that is very strange. So I would like to know if you know something about this. I think these are just normal organic changes in the search results. For us, the top stories carousel is part of the organic search results, and we're actively working on it. We get a lot of feedback from users around the content that we show there, and the team is actively working on the algorithms behind it. Sometimes you will see a big drop in visibility with regards to the top stories carousel; other times you might see a big rise in visibility when we actually do show more content there. So that's not a manual action where someone is saying, oh, I don't like your website, therefore you won't show up there. It's more a matter of our algorithms trying to figure out that for this query it makes sense to show it here, and for other queries maybe not. John, I don't know if you've seen the blog post about the sales event structured data markup spam issue. First of all, thanks a lot to the Trust and Safety team for bringing this topic up, because I've been in the coupon industry, helping most of the websites I work with on SEO, for a long while. And I've seen a lot of even the biggest players, retailers and others, I mean, even Google chipped in, abusing this sales event markup for a long while. I kept reporting on them and sent a lot of reports within my team as well. So now it's been picked up in a blog post. Does that mean there will be a manual action sprint going on? Is it going to be a really prioritized action? Or do we need to send even 10,000 more spam reports? 10,000 more spam reports, I think that would be fantastic. That would definitely keep the team busy here, and they wouldn't have to worry about their jobs anymore. It's something where we explicitly want to make sure that people are aware of this issue, because we also run across a lot of people who just put a plug-in on their page, and it adds this kind of structured data markup, and they're not even aware that it's actually against the guidelines. So when we do take manual action on these things, we want to make sure that people are aware of what specifically the problem is. And having a blog post like this makes it a little bit easier for them to see: oh, yeah, of course, I did that, and it worked, and Google found out now, and I'll fix it. So that's what we're hoping for there. Hey, John, I have a question, if you don't mind if I go for it. Cool, thanks. What are the best practices for keyword density? Write naturally. We don't actually have anything in our algorithms that says we need to see this amount of keyword density on a page. Instead, we try to understand what the page is naturally about. And sometimes that means you mention something once, or you mention some random synonym that we can pick up, and that's good enough for us. So I would write naturally. But at the same time, especially if you're looking at a site that hasn't looked into SEO at all, I would just make sure that you're at least talking about what you're trying to offer there. I see that a lot with small business websites: they'll have hired a designer to create the website, and it looks really fancy and fantastic.
But you can't tell if they're trying to offer a service or a product, or if they're trying to sell you a book, because the content is so generic that it's really hard for humans to figure out what it is that they want to sell and what they want to rank for. And of course, if humans can't figure it out, then our algorithms are probably never going to figure it out. But if it's not humans reading it, do the bots actually understand natural writing? Yeah. We have a lot of practice with that. And we can especially tell if you're artificially stuffing keywords in there; that's really obvious for us to figure out. Like, two or three times versus naturally, or maybe instead of just writing it once, you would actually write the keyword an extra time? I don't think that has any big positive effect. So I would generally just write as naturally as possible, so that when your users read it, they don't get turned off and run away thinking, this guy is just trying to push this down my throat, but instead feel, this person is talking the same language as me and understands what problems I'm facing and what they can do to help me solve these issues. Cool. Thank you. Yeah, that's what I feel too, but you never know. You hear so much about keyword density, and sometimes you see results where you think maybe you did well because you put the keyword on the page a couple of extra times. So thank you for your insight. Well, if I can comment on that shortly: I believe trying to understand how Google's algorithms rank keywords, thinking about the concepts behind the algorithms, is really helpful. For example, the main three concepts, as I see it: there's term frequency, of course, but that's just one of them. Then there's co-occurrence, and then there's TF-IDF, but in the end it's about topicality. So I really love what John said: write naturally and rank naturally. Well, I don't know whether anybody's heard of Firth, the Firthian idea: you shall know a word by the company it keeps. I always try to think of that relatedness, the topical relatedness; he's quite a well-known linguist. I know that people are kind of moving back to the whole keyword density thing, which ten years ago I think we all talked about, but TF-IDF must still come into it somewhere. It's kind of more or less the same thing, isn't it, really? It's just been repackaged so it sounds better. But it must play a part somewhere along the line. I think it's really tricky, in the sense that people tend to focus too much on the individual words in a lot of these situations, and then what comes out is something that sounds so unnatural that users don't really feel it's actually written for them. So that's the tricky part there. And I think using some tools to figure out what people are actually searching for, and making sure that some of that content is actually on the page, definitely makes sense. But I wouldn't go so far as to say, well, it has to be on the page two or three times, and every time I mention it in another form it's better, or I have to put different spelling variations in there. All of these are things that our algorithms have gotten really good at handling. And especially when you're talking about English content, where we have a ton of practice, I really don't think it makes sense to artificially try to push things in there that wouldn't naturally be on a page.
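Since term frequency and TF-IDF came up by name: here is a minimal worked sketch of the textbook TF-IDF formula, not Google's implementation, on toy documents, showing how a topical term outscores a word that appears everywhere.

```python
import math

# Sketch of the textbook TF-IDF idea: a term scores high when it is
# frequent in one document but rare across the corpus. The documents
# below are toy examples.
docs = [
    "black friday sale on laptops and phones".split(),
    "easter opening hours and easter events".split(),
    "laptop reviews and a buying guide".split(),
]

def tf_idf(term, doc, corpus):
    tf = doc.count(term) / len(doc)            # term frequency
    df = sum(1 for d in corpus if term in d)   # document frequency
    idf = math.log(len(corpus) / (1 + df)) + 1 # smoothed inverse doc freq
    return tf * idf

# A topical term ("easter") outscores a stopword ("and") for doc 1.
for term in ("easter", "and"):
    print(term, round(tf_idf(term, docs[1], docs), 3))
```

The co-occurrence and topicality comments above are exactly the point that modern ranking goes well beyond this kind of per-word score.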
Well, I'm sorry. Sorry, I apologize. Sorry about that. Yeah, my bad. John, you just said something pretty good about using tools to check this. Do you have any recommended tools for us? I would use Search Console. Yeah. I mean, I don't think you can take a generic tool that just scrapes your website and says, oh, this word is only mentioned twice and you need to mention it three times, because I don't really think that's actionable. Thank you. John, what do you think Google will do if, for example, everyone stops using Chrome and everyone starts using headless browsers? Then how would Google identify the user experience? I don't know what you mean. For example, I don't know exactly what Google does, but how does Google identify long clicks, short clicks, bounce rate, and so on? They need to understand these clicks and events. Maybe you can do it via Chrome. But let's say users are using headless browsers, so they're sending all this traffic without referrers. Then Google would not be able to identify whether this is organic or not, and they would not be able to see where users clicked. So how would Google identify it? You're making a lot of assumptions there. I think there are lots of assumptions in there that don't really play a role. And especially if everybody were using headless Chrome or a headless browser, that's kind of an unrealistic world. From our point of view, what we mostly do is try to figure out which algorithms work best for ranking. So from that point of view, it's something that's independent of the browser. It's something that we were doing before there was Chrome, when people were using Firefox or the early versions of Internet Explorer. We still have to evaluate our algorithms to figure out which of them is actually working better or not. These are things that continuously run, and we continuously work on trying to improve them. John, there seems to be a lot of conversation around the idea that the URL is going to die, because it's not scalable. But it seems to be scalable, because you've got the likes of, oh, God, what's it called, the guy who basically built the semantic web. He's basically built stuff that scales to every URL in the world and picks up on patterns and so on and so forth. So it kind of is scalable, because there are lots of signals about what is worth crawling and what to just leave, and you can scale it on a very big scale. Is that right? I don't see URLs going away any time soon. I think, as a way of addressing content, it's essentially a pretty stable system. It just basically works. I imagine on some devices, which is also fine, like on mobile, you don't really have room to see the whole URL, and people don't really need to see the whole URL there. But in general, I don't see the URL going away, in the sense that we'd have some magical new way of addressing content on the internet, being able to look it up and understand what it's about, and pointing people to it without using a URL. I mean, who knows? In 50 years, maybe things will be very different.
But at least in the meantime, we have to have some way of addressing content and pointing people at content, and URLs just work really well for that. Let me run through some of the other questions that were submitted, and then we should have a bit more time for others here. We migrated from HTTP to HTTPS, but we can't select the HTTPS version of our site in the disavow tool. We're unsure if Search Console needs both properties. What do we need to do? Yes, if you migrate to HTTPS, then you should add the HTTPS version to your Search Console account as well, and then you can use that for the disavow tool. So you can download the old file and upload it to the HTTPS version. Let's see. Using hreflang, Google will swap the URL, and the new ccTLD might be ranking lower, so why should I do that? So when it comes to hreflang, we essentially swap out the highest-ranking URL. It's not the case that we will rank your website lower because you're using hreflang. Rather, when we rank pages from your website, we look at the top ones, and if we have an alternate version that matches the user's location and settings better, we'll swap in that one. So your site will not rank lower because of hreflang; hreflang doesn't change rankings. Let's see. You said all indexed pages will be evaluated when determining the quality of a site. Let's say we have 100 posts published and 400 thin pages published by accident, maybe WordPress media attachment pages. Even though Google isn't sending traffic to those 400 thin pages, would Google say, oh, you have 400 indexed pages and a bunch of them are bad, so we're going to demote your site? Google does look at your site overall. We try to take a look at the whole picture, but we understand that not every page is as relevant for the website as other pages. So just because we've accidentally indexed a bunch of crufty content doesn't necessarily mean that your site is bad, and I wouldn't really worry about this. At the same time, if you notice this is happening and you want to take control of it, you can: either put a noindex on those pages, if they're just accidentally published, or serve a 404, or a 301 redirect to the actual preferred version, if you have a mapping there. All of those are good ways to take control of this. But overall, we understand that websites have a lot of crufty content that's generated automatically by a CMS, and that's no problem for us either. John, sorry, just really quickly: is it evaluated maybe at a site-section level? So say you have a very content-rich blog that's really engaging, you have really useful calculators, loads of wonderful stuff, but then you have this really weak, thin, and pretty repetitive section, and it's a big part of the site. Would you judge those two parts differently once you realize that one section is better than the other? Sometimes that does happen, yeah. One of the places where that can happen is with crawling: if we can recognize that a part of a website is kind of irrelevant, just a lot of cruft generated with URL parameters, for example, then we'll try to weigh those sections differently and say, this is the part that we really want to focus on, and this is the part that we know is there and look at every now and then, but it's not the primary part of the site. OK, OK, thank you.
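On the cleanup options just mentioned, noindex, 404/410, or a redirect: here is a minimal sketch, with hypothetical URLs and a deliberately naive meta-tag regex, that reports whether a list of accidentally indexed thin pages now carries a noindex (via robots meta tag or X-Robots-Tag header) or returns an error status.

```python
import re
import requests

# Sketch: check whether thin pages now carry a noindex, either in a
# robots meta tag or an X-Robots-Tag header, or return an error status.
# URLs are hypothetical; the regex assumes name= before content=.
urls = [
    "https://example.com/media/attachment-001/",
    "https://example.com/tag/misc/",
]
META_RE = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]*content=["\']([^"\']*)', re.I)

for url in urls:
    resp = requests.get(url, timeout=10)
    header = resp.headers.get("X-Robots-Tag", "").lower()
    match = META_RE.search(resp.text)
    meta = match.group(1).lower() if match else ""
    if resp.status_code in (404, 410):
        verdict = f"returns {resp.status_code}"
    elif "noindex" in header or "noindex" in meta:
        verdict = "noindex"
    else:
        verdict = "indexable"
    print(url, "->", verdict)
```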
OK, there's a question from Jarno, which I think is interesting as well, with regards to structured data. He asked a while back about putting all kinds of structured data markup on a page, and I said it probably doesn't make sense to just stuff a page with structured data. On the other hand, Gary, at a recent conference, said that we actually do take a lot of structured data into account even if we don't use it for a rich snippet. From my point of view, both of those are still pretty valid, in the sense that we do use structured data to try to understand the context of a page a little bit better, but the direct effect is more on the visible side, where it determines what we show in the search results as rich snippets. So I would still try to find a balance there and think about where it makes sense to provide extra context. Maybe that's something you can do pretty much for free: if you have a structured database behind your website, and you can mark up all of the names or all of the locations, and that comes for free, then maybe go for that. On the other hand, I wouldn't go through the schema.org website, think about all of the possible things that could be marked up, and manually go through your website trying to add all of that markup, because that's just a lot of extra bloat in your pages that probably doesn't help us much at all. So you need to find a middle ground there: put markup on pages where you think it either would be shown in the search results as a rich snippet, or might be shown in the near future. Maybe you see where Google is headed and you think, oh, Google is going to show this type of markup next; that might be something you want to bet on and put on your pages. Additionally, if there's something that you feel strongly about, or that you can get for free from the CMS or database that's powering your website, that might be worth adding. But I'd caution away from just blindly adding all possible structured data, because that's probably just a lot of extra work and maintenance overhead that doesn't provide any value for search.
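As an illustration of the "comes for free from your database" case: a minimal sketch that emits schema.org Product/Offer JSON-LD from a record a CMS would already hold. The record and its field names are hypothetical.

```python
import json

# Sketch: emit schema.org Product/Offer JSON-LD from data the CMS
# already holds "for free". The record and field names are hypothetical.
record = {"name": "Example Widget", "price": "19.99", "currency": "USD"}

json_ld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": record["name"],
    "offers": {
        "@type": "Offer",
        "price": record["price"],
        "priceCurrency": record["currency"],
    },
}

# Embed in the page head as a JSON-LD script block.
print('<script type="application/ld+json">')
print(json.dumps(json_ld, indent=2))
print("</script>")
```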
What is your advice to optimize for artificial intelligence? I don't actually have a lot of input. That's a very broad question, and I don't think I have anything I could add within a minute or so. But it's definitely an interesting topic. Let's see. Oh, gosh. So many questions, and they all come in while we're live. I just have a few minutes left, so maybe I'll open it up to you all. What else is on your mind that I should get through and answer? Yeah, Jordan. Yes. Go ahead. Or maybe not. Sorry about that. John, I've got a really quick question. I remember a long time ago you said something that's resonated with me for a long time, and I keep thinking about it. You said that sometimes there are sites with many pages that are not similar, but are kind of equally potential candidates for similar extensions of a primary target term, across lots of different types. And I remember you said that kind of setup could mean that ultimately you can't rank for anything generic. Is that because you have lots of children, and maybe grandchild pages, that are also candidates for a category's target term? Everything gets divided, relevance-wise, amongst all the children and grandchildren and so on, almost like stealing. Does it mean that ultimately you're probably never going to rank the category as such, because relevance is spread? I remember Matt Cutts saying something about division between lots of different pages, and I remember you saying it means you can't rank for anything generic. I don't know. I don't quite understand the context. I think you've not got time now, but I'll have a think about how to put it a bit better, yeah. OK, I see one question in the chat as well. We have many internal filter links on category pages of an e-commerce site which have nofollow. Is that OK? That's pretty much OK. Yeah, I think you can do that. We have some blog posts and help center content on faceted navigation, so I would look that up and see what the advice is there. There are different ways that you can handle this type of situation. Using a nofollow there is probably OK too, in the sense that we're probably not going to dig too far into those facets anyway, so that kind of helps us to focus on the actual results pages. There are many different ways of looking at this. So what I would do is double-check our help center and the blog, and if you're still unsure, post in the Webmaster Help Forum with actual examples from your website, so that people can take a look and see: is this actually good, or is this something that you need to change, or is it good enough that you don't really need to spend too much time on it? One quick question. It was very interesting to see that, on one of the domains I manage, they had a retail page, let's say X. And unfortunately, the PPC team created a landing page for AdWords but made it indexable. I realized it the next day and noindexed it, and I waited. Then the rankings started fluctuating. It was ranking number one or two for the very important keywords it should rank for; then it started fluctuating to number three, four. Then I served a 404 and nothing changed. Now I serve a 410. So that created kind of a duplication between the pages; it was unintended, and I dealt with it right away, but it still keeps fluctuating, and I don't know why. What you can do in cases like that is use the submit URL feature in Search Console to tell us very quickly that this page has been updated. You can do that in Search Console with the Fetch as Google tool. And in the search results, for some queries like "submit URL to Google", we also show a box that you can use for that. So that might be an option next time, especially if you set the page to noindex: if we can crawl it and see the noindex, then we can take that into account fairly quickly. Or if you use a rel canonical on there to point to your preferred version, that's also something we can take into account fairly quickly. But if you don't tell us about the change, then maybe we'll keep it in our index for a couple of days, and then when we actually do re-crawl it, we'll take it into account.
Yeah, John, is there a priority? Do you remember that question I asked you on Twitter the other day about Fetch as Google's submit to index? Is there a priority given to some URLs, not sites, URLs, over others? So say, for instance, CNN's pages are likely going in through the API anyway. But say a really important home page of a site gets submitted, versus some low-level crappy page that you know has got some issues. Are you going to react quicker with submit to index on the important one? I mean, we see a lot of abuse with these features, so we have to find a way to balance that. So that can definitely happen there. All right, so it's been great having you all here. I need to take a break and run off. But I have the next Hangouts planned for Thursday in German and Friday in English again. So feel free to copy your questions over there if you still need an answer to something. Thanks again for joining, and I hope to see you all again next time. Thank you, John. We appreciate it. Bye. Bye, everybody.