All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do are these office-hours hangouts, where folks can join in and ask their questions around websites and web search, and we try to find answers. A bunch of stuff was submitted, even though it's just been a few days, so lots of things are lined up. But as always, if any of you want to get started with the first question, you're welcome to jump in.

Hi. I actually have two questions. The first one is about URL length. We have a client, and when I checked his website, his category URLs are like this: domain URL slash header slash odeter slash freestanding odeter. It's very long, and he used the keywords a lot of times. So is that bad for SEO? What do you suggest in this case?

That's fine. I mean, we use URLs to identify the content. So whether it's long or short is essentially up to you. When we have multiple URLs that have exactly the same content and we have to pick a canonical, then usually we tend to prefer a shorter one. But usually that's within your own website. So if you have the short version and the long version and they both show the same content, then we try to pick the shorter version. But it's not that the shorter version has a ranking advantage or anything like that.

And the second question is about the backlink source. If an English website gets a backlink from another English website, and another backlink from a non-English website, do you give the same importance to both, or does it vary based on the language?

That's all the same to us. The language of the website doesn't matter. What happens is we try to understand the anchor text and how it belongs within the context, to understand the page they're linking to a little bit better. But it's completely normal for websites to have links from all kinds of other websites from different countries and different languages. Thank you.

John, perhaps I may chime in before you go over the other comments, if that's OK. OK. I just wanted to ask a trivial question, or perhaps not that trivial, regarding interstitial pop-ups. What we currently have is a discussion between our sales team and the SEO people regarding how we actually get consent from our users. There is one option, which is this bar at the bottom of the page. And the other one, which the sales people like more, is to have an interstitial, which actually overlays the content and even grays out the background content. You said in a recent tweet that consent as such is OK if you have the cookie bar at the bottom; however, it becomes a problem if you have these interstitials which overlay the content. And I wanted to ask if you can perhaps elaborate a bit more on what could be possible when it comes to interstitials. Perhaps we don't gray out the background? It's basically a means of getting more people to give consent to allow us to display advertisements. What's your advice when it comes to this?

So I guess, first of all, I don't know what the legal requirements are that you might need to watch out for. So that's kind of on the side; I don't know if my suggestions would be compliant with what you're looking for.
In general, the bar on the bottom of the page is something that's really useful for us, because then we can still see the rest of the page. In particular, we can recognize how much of the page is mobile friendly, we can recognize which are the prominent elements on the page, all of that. That's the thing that makes a lot of the normal web search processes just work by default. That's the one side.

The other extreme that we sometimes see is when people redirect to a different URL for confirming that they're allowed to access the content, or an age interstitial, or something like that. And there the problem is, when you redirect to a different URL, Googlebot will follow that redirect, and we will only have that interstitial content to index. There is no other content on the page; it's not even the right URL. So what usually happens in those cases is we index the interstitial, and we think all of the pages from the website are the interstitial. We basically fold all of the pages into that interstitial URL, which is the thing that you really, really want to avoid, unless you want to remove your website from search. No, we don't.

And I guess the alternative that you're considering is somewhere in between, in that you have a full-page interstitial but you remain on the same URL. And there, it's generally more a matter of how you implement this in a way that Google can still see the normal content. So in particular, if you're using JavaScript to display something like a div on top of the page, and the rest of the page is still there, then that's something where we can still see the rest of the page, and we can work with that. So those are the different approaches there.

One other thing that I think I have seen some sites do is to go into the area of, what is it, flexible sampling, almost, where they request that a user signs in in order to see the content. And by signing in, they also have some kind of cookie consent or whatever attached to that. And for us, that's usually less of a problem, because we would see that as flexible sampling. You're saying, well, my content is here; you just have to take these steps to actually see the content. So that's perhaps another approach that might be worth reviewing. Perfect. Makes sense. Thanks a lot, John. Sure.
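A quick way to sanity-check the overlay approach described here is a sketch like the following, assuming a hypothetical page URL and a snippet of text that should remain indexable: it confirms that the consent flow doesn't redirect to another URL, and that the real content is still in the served HTML underneath the overlay.

```python
# Minimal sketch: verify that a consent overlay leaves the page's real
# content in the served HTML (a div on top of the page) rather than
# redirecting to an interstitial URL. URL and marker text are placeholders.
import requests

PAGE_URL = "https://example.com/article"           # hypothetical page
CONTENT_MARKER = "first paragraph of the article"  # text that should stay indexable

resp = requests.get(PAGE_URL, allow_redirects=False, timeout=10)

if resp.is_redirect:
    print("Problem: consent flow redirects to", resp.headers.get("Location"))
elif CONTENT_MARKER.lower() in resp.text.lower():
    print("OK: the main content is still present in the HTML.")
else:
    print("Check this: the served HTML does not contain the main content.")
```

An overlay added client-side while the article stays in the DOM passes this check; a redirect to a separate consent URL is the pattern that gets the interstitial indexed instead of the content.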
Hello, John. Hi. Actually, I have a lot of problems I wanted to ask you about, but I can only ask a few things. Recently, a reconsideration request I filed for my client got rejected. Google sent us, I mean, you guys sent us two sample links as examples of unnatural links. And I found that one of the links carries a nofollow tag. I don't think Google considers nofollow links as important links, you understand my point? I don't understand how you can send a nofollow link as a sample URL. Can you please explain this?

That could be something that went wrong on our side. But generally speaking, if you did a reconsideration request, especially for link spam, and you got it back saying that it's not good enough, then I would focus on the bigger picture and not worry too much about that one nofollow link. Really, think about what kind of bigger-picture things you have been doing with regard to links that you still need to clean up. And sometimes what can happen is that they will flag an issue and say, well, this is the kind of thing that you should be watching out for. It's not that these are the two remaining links that you need to fix, but rather, here are some examples of issues that you need to clean up. So I would take it more in that direction and really try to clean those links out as completely as possible. Sometimes the folks on the forum can be helpful for these kinds of things, where they can say, oh, you still have all of these millions of links, and they can guide you towards the things that you really need to remove.

OK. So, John, we have also identified that many of the links are coming from feed pages. I mean, after checking all the external links, the backlinks we are getting in Webmaster Tools, a lot of feed pages show up. What should we do about them?

Feed pages? What kind of links do you mean? I didn't understand. Feed URLs. How do you mean, feed URLs? I mean, we cannot open those kinds of URLs; if I open one, it just gets downloaded to our PC, to our system. Do you have some examples? So, RSS feeds? Yeah, that's correct, RSS feeds. OK. Usually, that's less of a problem. The RSS feeds are usually generated based on the content on a blog, for example. So if there's a link in an RSS feed, then probably that's due to a link in a blog post somewhere, and cleaning up the blog post fixes the RSS feed automatically too.

So what do you suggest? Should we include the feed URLs in order to disavow them? No, I wouldn't worry about disavowing the feed URLs. If you've gone out and placed blog posts on other people's sites, and those links also show up in the feed, then maybe it's best to just disavow the whole domain, to make sure that you have everything covered. Because if you've been placing links on other sites and they show up in the feed in addition to the site, then it's not the feed that's the problem. It's the general practice of going out and dropping links on other sites.

Great, John, one last question. Sure. If we place a naked URL, in link building, any place on a website, will it be considered a natural link? So, without a link, or? Without a link, without any context. Well, if it's not a link, it's not a link. You should be fine. Yeah. OK, fine. Thanks a lot, John, for your help. Sure.

Let me run through some of the submitted questions, and we can get to more live questions afterwards, just so that we don't completely ignore all the things that people lined up. A lot of times people just can't join because of the time zone, so I want to make sure that we can try to get some of their questions through.
OK, the first one is about an AMP website. They changed all the internal links to point to Google's cache URLs, so basically even on desktop, it always goes to the cache. A few months later, I had some negative results in terms of rankings, and changed back. So the question is, how should I do it, essentially?

First of all, I think pointing at the cache URL is generally a bad practice, because these URLs can change over time; it's not that those URLs will always remain the same. So personally, I would link within your website, link to your own URLs. And if it's loaded on the AMP cache and the cache swaps out the URLs against the AMP cache URLs when it's displayed, that's fine. But I wouldn't build a website structure by pointing at AMP cache URLs. The other reason why I wouldn't do that is that, as far as I know, a lot of these caches are blocked by robots.txt. So if you're changing all of your internal navigation to point at URLs that are blocked by robots.txt, then that's definitely going to have a strong effect on your website. So my recommendation would be to link within your site and keep those links within your site. And if it's loaded on the cache and it swaps the links out for the cache, then that's on the user side; that's not something you need to worry about. But for crawling and indexing, definitely make it so that we can index your site and not have to worry about the cache.

I'm seeing random errors of Google Tag Manager verification failures in Search Console for our sites. And you link to a thread. I'm not aware of anything specific happening there with regard to Tag Manager verifications, but I'll definitely take a look at the thread and see if there's anything more that we need to figure out. So I'll try to find something there.

We often receive DMCA violation notifications. We're not pirates, so we dispute the complaints and they often get dismissed, but it takes time. If it weren't for the fear of Google's displeasure, we wouldn't be doing it. Does the number of DMCA complaints affect the ranking of a site? Is it possible to get rid of a bad reputation? So I double-checked on this, because I wasn't sure what we actually announced at the time. But I think around 2012 we did a blog post saying that the number of valid DMCA complaints is something that we would take into account for ranking. So if you're disputing them, if they go away when you flag them as being wrong, then that's perfectly fine. But if your site is collecting them over and over again, then that's something our systems might pick up on.

If we have breadcrumbs only on the desktop site, how will it affect the snippet on Google? In general, when we switch to mobile-first indexing, we only take the content from the mobile site into account. So if you only have some kind of markup or some kind of content on the desktop version, and it's not available at all on the mobile version, then we will not have that available for indexing. So for breadcrumbs: if they're only on desktop and not on mobile, and we switch to mobile-first indexing, we will not have your breadcrumb markup anymore. With regard to putting breadcrumbs at the bottom of the main content on mobile, that's perfectly fine. It's also possible to use responsive design elements, where you're saying, well, on desktop we have room and we can display it, and on mobile maybe we don't have room or we need to display it in a different way; that's also fine. But the markup needs to be there on mobile if you want us to use that markup.
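For illustration, here is a minimal sketch of breadcrumb markup that would ship with the mobile HTML as well; the trail names and URLs are placeholders, and the output is a standard schema.org BreadcrumbList JSON-LD block.

```python
# Sketch: emit schema.org BreadcrumbList JSON-LD for a hypothetical trail.
# The same block should be served in the mobile HTML, per the answer above.
import json

crumbs = ["Home", "Widgets", "Blue Widget"]
urls = [
    "https://example.com/",
    "https://example.com/widgets/",
    "https://example.com/widgets/blue-widget/",
]

breadcrumbs = {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    "itemListElement": [
        {"@type": "ListItem", "position": i + 1, "name": name, "item": url}
        for i, (name, url) in enumerate(zip(crumbs, urls))
    ],
}

print('<script type="application/ld+json">')
print(json.dumps(breadcrumbs, indent=2))
print("</script>")
```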
We migrated our website on the 13th of December from an old platform that wasn't mobile enabled. It was a major move: a new server, new platform, new categories, a new URL structure. All the pages and categories have been indexed, and the URL redirects are set up. We had some technical issues with 404s, which have been cleaned up. We're working on speed, which was fine in testing but is now too slow. Taking all of this into account, is it reasonable to assume that the rankings won't have settled down yet, or could the site speed be the cause of the low traffic?

So I took a quick look at this before. And I think what is happening is you moved within the same domain to completely different URLs, and these kinds of moves usually take more time than when you move to a different domain. It sounds almost paradoxical, but when you move to a different domain, we can take everything one-to-one and essentially just copy it over. Whereas when you restructure your website, we really have to relearn your whole website and understand, again, how all of this fits together: what the context is of individual pages, how to connect things, what the internal links are like, the internal anchor text. And all of these are things that take a significant amount of time for us to settle down and figure out again. So I wouldn't assume that speed is the issue here. We do take speed into account, but that's generally a little bit lower with regard to priority and probably takes a little bit longer to get picked up. Rather, on the one hand, the restructuring takes time. And on the other hand, if you change the design of a website significantly, then that can also affect how we see it for ranking. For example, if you had a website with a lot of good textual content on the pages and a clear internal navigation, then that's a good thing. And if you redesign into a really minimal version of the website, with the same URLs maybe, but with much less internal navigation or much less content on the pages, then obviously we would rank it based on the lesser information that we have there.

One thing that points in the direction of this not being a speed-related problem is when you look at Search Console. What I tend to do in cases like this is take a period of time before the change and after the change, where you're looking at roughly the same kinds of days. So I try to pick the same weekdays, and I try to skip over holiday seasons, because that always skews the numbers a little bit. And I compare the queries where the site was ranking before and after, and the URLs that were ranking before and after. By comparing both of those, you can fairly quickly see if there is any problem with the old URLs maybe not being redirected: if the most important old URLs don't show up at all in the new time period, then that's a sign. And with regard to the queries, you can often understand a little bit better what kinds of changes you're seeing. Is this a change across the board, where maybe all of the queries before and after dropped by 20% or some number? Is it specific to the branded queries, or is it specific to the long-tail, non-branded queries?

By looking at those differences, one thing I noticed with your website is that the branded queries are all more or less the same, which to me points in the direction of: technically, your website is kind of OK. Overall, we're seeing it as generally being OK. So speed, which would be something that applies across the whole website, is probably less of an issue here. But the non-branded queries are the ones where there's a drop. And to me, on the one hand, that suggests that maybe these are URLs that we haven't recrawled and re-indexed that well yet, which could mean that, coming from a restructuring, it just takes time.
Or alternatively, maybe these are pages that look significantly different before and after, where we're ranking the page on what we have indexed now, and the indexed version that we have now is significantly less useful for us, for whatever reasons. Maybe it doesn't have enough text on it. Maybe it doesn't have a clear heading structure. Maybe the images are not that accessible for us, all of these things. So that's the direction I would go there. I would double-check this in your own Search Console account, to see if that matches what you would see as well. And if it's really more the non-branded queries, more the long-tail queries, that were going to your site and are now less visible for your site, then I would, on the one hand, give it more time to settle down, and on the other hand, really significantly review the pages themselves to see what they were like before and what they look like now. If you're not sure what they looked like before, there's archive.org, which is a free service where you can check a lot of URLs to see older versions of them. So you can check the old versions that used to be shown in Search, see what they looked like, and compare that to the current version that you have yourself.
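One way to run the before/after comparison described here is a sketch like the following. It assumes two trimmed CSV exports of the Search Console performance report (comparable date ranges, same weekdays) with "Query" and "Clicks" columns; the file names and brand term are placeholders.

```python
# Sketch: compare query-level clicks before and after a site restructure,
# split into branded vs. non-branded, and list queries that vanished.
import pandas as pd

before = pd.read_csv("queries_before.csv")  # columns: Query, Clicks
after = pd.read_csv("queries_after.csv")

merged = before.merge(
    after, on="Query", how="outer", suffixes=("_before", "_after")
).fillna(0)

is_brand = merged["Query"].str.contains("examplebrand", case=False)

for label, part in (("branded", merged[is_brand]), ("non-branded", merged[~is_brand])):
    change = (part["Clicks_after"].sum() / max(part["Clicks_before"].sum(), 1) - 1) * 100
    print(f"{label} queries: {change:+.0f}% clicks vs. the earlier period")

# Old queries that disappeared entirely often point at URLs that were
# never redirected or haven't been re-indexed yet.
lost = merged[(merged["Clicks_before"] > 0) & (merged["Clicks_after"] == 0)]
print(lost.sort_values("Clicks_before", ascending=False).head(20))
```

A flat drop across both groups points at something site-wide; a drop concentrated in non-branded, long-tail queries points at individual pages that changed or haven't been reprocessed.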
All right: we have a few thousand sites that show up in the Search Console report "Indexed, not submitted in sitemap." I think "sites" is probably "pages," not sure. In German, we tend to use the same word for websites and web pages, which makes things a bit confusing sometimes. But I think they probably mean pages. So: a few thousand pages that show up in the Search Console report "Indexed, not submitted in sitemap." Most of them are non-canonical pages which point to their respective canonical version via rel=canonical. When we check one of these with a site: query, it shows up in the search results. When we check them with the URL Inspection tool, it shows the canonical URL provided by the user with the correct canonical, and the canonical URL selected by Google as the inspected URL. Why are these non-canonical pages indexed, although there's a clear signal with a canonical? The pages are almost 100% identical.

So let's see. I think there might be multiple things happening here; it's hard to tell without some example pages. But in general, when it comes to canonicalization, we take multiple factors into account. We do use the rel=canonical; that's a fairly strong signal for us. We also use things like redirects. We also use internal linking, external linking. We use sitemap URLs as well. The internal linking structure, together with things like hreflang, all of that plays a role as well. Essentially, we take these multiple URLs that show the same content, and we try to figure out which one of them we should pick. And it can happen that we pick URLs that have a rel=canonical on them pointing to a different one; that's not impossible.

If you're seeing this on a large scale, then I would try to figure out why Google might be doing this. For a few individual pages, I wouldn't care about this; sometimes it goes this way, sometimes it goes that way. From a ranking point of view, it doesn't change anything. It's purely which of the URLs is shown. So again, if you're seeing this on a large scale, I would try to dig into those situations and see what you might be doing that could be confusing Google. That could be things like internal linking. Are you linking to the exact URLs that you have set as the rel=canonical? If you have something like hreflang, are you using the exact URLs there as well? Sometimes things like upper and lower case play a role. If you have a sitemap file, does that match exactly what you have submitted as well? All of these things would ideally align. If they align, then we'll essentially go with what you suggest. If they don't all align, then Google will try to figure it out and come up with a canonical version that works in general. And again, it doesn't change anything for ranking; it's purely that you have a preference, and ideally Google would show that preference. The other place where this plays a role is, of course, with tracking: if you use things like analytics, then you want to be able to track the individual URLs that you care about. But that's the direction I would go to try to figure out what might be happening. If you have URL parameters in these URLs as well, so something with a question mark and then some word equals something else, then the other place I would look is the URL parameter handling tool. Because it can happen that you have a setting there that you forgot, where it says Googlebot should ignore these parameters, and that's why those URLs are not being picked as canonicals.
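As an illustration of "all the signals should ideally align," here is a small sketch for one hypothetical page and sitemap. The regexes are deliberately naive (they assume the rel or hreflang attribute comes before href), and the comparisons are exact string comparisons, because case, parameters, and trailing slashes are precisely the mismatches worth surfacing.

```python
# Sketch: check that rel=canonical, hreflang annotations, and sitemap
# entries all use the exact same URL strings for one page.
import re
from urllib.request import urlopen

page_url = "https://example.com/widgets/blue-widget/"
html = urlopen(page_url).read().decode("utf-8", "replace")
xml = urlopen("https://example.com/sitemap.xml").read().decode("utf-8", "replace")

m = re.search(r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)', html)
canonical = m.group(1) if m else None
hreflang_urls = re.findall(r'<link[^>]+hreflang=[^>]+href=["\']([^"\']+)', html)
sitemap_urls = set(re.findall(r"<loc>\s*([^<\s]+)\s*</loc>", xml))

print("rel=canonical:", canonical)
if canonical and canonical not in sitemap_urls:
    print("Mismatch: the canonical URL is not listed in the sitemap.")

for alt in hreflang_urls:
    # Flag hreflang entries that refer to this page but differ in case or
    # trailing slash: close enough to confuse, but not identical.
    if alt.lower().rstrip("/") == page_url.lower().rstrip("/") and alt != page_url:
        print("Near-miss hreflang URL:", alt)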
I'm looking at log files from the past several months, and about 75% of the 404s we send Google happen when Google requests very old embedded content. So we're talking about old CSS, JavaScript, and image files that haven't been on the site for up to five years in some cases. I've seen URLs from a site we migrated three years ago in the referrer. I verified the requests as genuine Googlebot. Do these kinds of requests happen when Google renders old pages? If so, why are you rendering really old versions of our page? It seems like a sign of something going wrong. Is there something we can do about this?

So there's generally nothing you need to do about this. It's essentially just our rendering setup, our crawling and indexing setup, checking pages again. And when it comes to crawling, it's really common for us to re-crawl URLs that are old, where we think there's probably going to be a 404. And if you return a 404 to us, then that's perfectly fine; it's not going to cause any problems. So I think if you look at your log files in detail, you'll find lots of these weird quirks, but they generally don't cause any problems. I think it's kind of weird that we would re-render really old pages, but I don't see it as being weird enough that I would contact the rendering team and tell them to stop doing this, because usually they have good reasons to check things out. Sometimes they're experiments; sometimes it's just double-checking how the old rendering worked versus how the current rendering works. There are lots of reasons why that might be happening. But again, I generally wouldn't worry about this. If the number of requests is causing too high a load on the server, then you can adjust the crawl rate setting in Search Console to let us know to crawl less. And when we crawl less, it's not that we'll crawl percentage-wise less of all of the URLs, but rather that we'll focus more on the URLs that we actually do care about. So that's an approach you could take if you wanted to go in that direction.
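A sketch of the log check the question mentions: pull out 404s served to anything claiming to be Googlebot, and verify the caller the way Google documents it (reverse DNS to googlebot.com or google.com, then forward confirmation). The log path and common/combined log format are assumptions.

```python
# Sketch: find 404s served to verified Googlebot in an access log.
import re
import socket

LINE = re.compile(
    r'^(\S+) \S+ \S+ \[.*?\] "(\S+) (\S+)[^"]*" (\d{3}) \S+ "[^"]*" "([^"]*)"'
)

def is_real_googlebot(ip: str) -> bool:
    try:
        host = socket.gethostbyaddr(ip)[0]          # reverse DNS
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        return socket.gethostbyname(host) == ip     # forward-confirm the name
    except OSError:
        return False

with open("access.log") as fh:  # placeholder path
    for line in fh:
        match = LINE.match(line)
        if not match:
            continue
        ip, _method, path, status, agent = match.groups()
        if status == "404" and "Googlebot" in agent and is_real_googlebot(ip):
            print("Genuine Googlebot 404:", path)
```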
Our competitor is now making all the external links on their site with the help of a "gasket" page with JavaScript redirection. They link from site A to site B not directly, but through an additional page on the site with some kind of JavaScript, something like a link from Facebook. So the question is, do these JavaScript gasket URL links convey weight and signals? Does Google not perceive them as a real intermediary?

A lot of times we can crawl through those, and it's absolutely fine; we get to the destination page, and that works for us. So usually that's less of a problem. In general, if we can't crawl through them, then it'll look to us like they're just not linking to those pages. But I don't see this as being particularly useful for a website. If a website just doesn't want to pass any signals through a link, then I would just use a nofollow on that link. All of this extra complexity just means more things can break. So personally, I would avoid going through all of these extra steps to try to hide links.

If Google's search result page is intermittently ranking two pages for the same keyword, how can we help Google to rank only one of them? We have two pages that keep alternately ranking for the same term in the search results, and I wonder if the internal competition is ultimately dragging down rankings. Both pages are important to the site, so we can't just delete or redirect one.

The cases where I have seen something like this happen are usually when we're unsure what the intent of a query is, so it can happen that this fluctuates over time. For example, we might not be sure if the user is searching for a specific product or a specific kind of product. And depending on how we judge the intent of those queries, we might say, well, maybe a category page would be more useful, because they're looking for this type of product. Or, alternately, we might say, well, they're looking for the exact product, so the product detail page would be more important. And these kinds of fluctuations can happen over time. It can happen that, I don't know, from day to day it changes. It might happen that with queries like this we're so much on the edge that we show both of these pages, because we're just not sure what exactly the user wants. And from our point of view, that's usually normal, in that it settles down over time, usually. Or maybe it'll settle down in the sense of: well, we've figured out we can't tell what the exact intent is, so we have to show both of those pages, because one of them will work, but we're not sure which one. So that's something where, from my point of view, I wouldn't worry about it too much. If you have really strong feelings about those two pages, then I would just make sure one of those pages is really well targeted for what you're trying to aim for, and maybe the other page is really well targeted for a different, clearly separate facet that people would be searching for. But for the most part, I don't see this as being problematic.

A question about implementing schema markup for the future. Future-looking is always tricky. Can it be easier to maintain? Can it need less reliance on multiple steps and parties for extensible code? Can we make it easier to instruct developers and designers? Maybe make Google Tag Manager more efficient? I don't know. I think it would be nice to have it easier. But to be totally honest, I think in the future, at least in the near-term future, we will have more types of structured data markup. And it will probably continue to get more complicated, in the sense that there are just lots of requirements for different search features. In order to do something really fancy in the search results, where we can really highlight your site well, or if there is a way to also include information that we can show, or rather speak, on the Google Assistant, all of these things currently rely a lot on structured data markup. So as these areas expand, I would expect to see some amount of additional markup coming in. And anytime you have more structured data markup, the interactions get trickier, and the requirements get a little bit harder, in the sense that, oh, you have to have, and I'm just making something up here, a name with 17 characters; you can't just have a name with 15 characters. Sometimes these things change over time. So I suspect it'll get harder in the near future, at least. Maybe in the really long term, it'll be like, oh, the machine intelligence can figure it out for you; you just write a text file, and everything else happens automatically. I don't see that happening in the next couple of years. So if you're kind of on the fence with regard to "should I learn more JSON-LD to figure out how to do markup myself," I think that's a good approach. At the same time, a lot of these things can be made easier by content management systems and the plug-ins and extensions that you have for them. So if you're using something like WordPress, then maybe there's a plug-in that does all of the structured data markup for you automatically. I know there are a bunch of recipe plug-ins, for example. So instead of creating all of the special markup yourself, if you have a plug-in that does it for you, then those plug-in developers will, ideally, continue to stay on top of things as the markup evolves over time, and probably make that easier for you. But purely from a markup point of view, I don't see this getting any easier, unfortunately.

We had two products, so the page structure was obvious: a home page, domain.com, with links to both products. Now we dropped one product. Which is better: put the main product landing page directly on domain.com, or redirect domain.com to the main product landing page?

Both of these are kind of the same thing, in that you're folding one product together with your home page. From my point of view, if you're reducing the scope of your website and really going to just a one-page website, then I would focus on the home page directly, rather than redirecting from the home page to a lower-level page. If this is something very temporary, in the sense that maybe next month you'll have another product to sell and then you go back to having multiple products on your site, or maybe next month you have 100 products to sell, then I would stick to the structure where you have multiple product URLs and the home page separate. But if you're really sure that you want to decrease the scope of your website to just one page, then put it on the home page. I think that just makes everything a lot easier for tracking and for crawling and indexing.

The rich results update in September changed a lot of search results. I want to understand why stars are still appearing for some home pages. In my opinion, it could be the use of misleading markup, or is there an actual issue with understanding markup technically?
And will it be solved in the near future? So I'm not exactly sure which update you mean from September; we've had a lot of things over time. But in general, when it comes to structured data and rich results, sometimes we do get tricked by people who are implementing the wrong markup. And then we think, oh, this should be showing review stars, because we think it makes sense here. And actually, when someone manually looks at it, they say, no, that's implemented incorrectly. Another common one that we see is recipe markup, where maybe you'll have recipe markup on a fantastic blog post on your website, and it's not really a recipe. Our algorithms might look at that and say, oh, it looks like you have a recipe with multiple steps; but actually, it's a blog post, and it's definitely not a recipe. So when someone manually looks at it, they say, well, this is wrong. And from a manual point of view, we do try to take action on a lot of these. So if you're seeing sites showing up with rich result types that don't make sense for that site or those pages, then let us know. Feel free to use the spam report form. If you're seeing this on a larger scale, you can also drop me a note directly. Sometimes what I've also seen is that some plug-ins, like I said, make things easier, but plug-ins can also be wrong sometimes. Sometimes the plug-ins are just set up incorrectly, which can result in us seeing an issue on a larger scale. And that's something where some people just send a note to the plug-in maintainer and say, hey, you have this bug, you can fix it, and then your users won't be upset. So I'd use the webspam report form for most of these. If you're seeing things on a larger scale, feel free to drop me a note.

If some URLs are blocked by robots.txt, is this still considered a waste of Google's crawl budget? No. If they're blocked by robots.txt, we're not going to crawl them, so it doesn't change anything. In particular, crawl budget is something most sites don't need to worry about. So for the most part, I wouldn't worry too much about that or try to optimize for it, unless you're really a website with millions and millions of URLs. Sometimes small things on a website like that can lead to us suddenly having two or three or ten times as many URLs as you actually have. With a small website, that doesn't really matter. But if you have millions of pages from the start and suddenly you have ten times as many pages, that's where we really get bogged down with crawling.

Now that deduplication is in effect, does a web page need to rank in position one to be a featured snippet? Before deduplication, I was under the impression that results one to ten were eligible, but mostly one to three earned the snippet. Is that, or was it ever, the case? I don't think that was ever really the case. I'm not sure what all has been changing with the deduplication that you mention; I think this is with regard to the featured snippet and the organic results sometimes appearing on the same page, and some of the experiments we've done there to try to simplify that. But in general, we can pick featured snippets from all kinds of URLs; it's not that they have to rank first. And purely from a practical point of view, it's pretty common that we run across a page where we think, well, this is a really good page for this result, so we'll rank it first.
But at the same time, it's hard for us to determine what would be a good featured snippet for this page, because maybe the structure of the page is a little bit tricky, or maybe the query is hard to match with the content on the page. And then it can happen that we say, well, this is a good first result, but we cannot pick any featured snippet from this page, because we're just so unsure about that. So we won't. And then maybe we'll pick a featured snippet from one of the other results. Or maybe we'll just say, well, we don't have a featured snippet for this query, and that can also be fine.

Then the cookie consent thing. I'll try to see if I can find some more information on what variations would work well for cookie interstitials. I also noticed some folks on Twitter pinged me about this, so maybe we can find something there.

How does Google treat timestamps in URLs? For example, if you have slash month, slash year, slash today, would that be the same as slash month, slash year, slash yesterday? So when it comes to dates, we use various signals to pick up information about a date that is relevant for a page, and the URL can be a part of that. Sometimes it's a pretty clear signal that a page was published on a day if it has a URL like that. But also, I think in the article markup you can specify a date on the page, to let us know there as well, as well as visible dates on a page and all kinds of other signals. And with regard to dates in general, the question, I guess, is whether something that looks like it's from today is better than something from yesterday. And the answer is no. It's not the case that something fresher will be considered better than something older. And that's kind of obvious when you think about it. There are lots of topics where the information has been published, and there's evergreen information out there, really good reference material, that doesn't change from day to day because nothing has changed. And that's really good content that deserves to rank well in search. On the other hand, it might happen that some recent event takes place where suddenly everyone is looking for something new on a topic. For example, with geographic regions, maybe geographic background information about a country is really useful as reference material. But if something critical happens in that country, then people don't want the reference material; they want the news, the newest information. So in those cases, we probably would prefer something fresher rather than the reference material. So it's not that fresher is always better; it really depends on the situation, and our algorithms change there fairly quickly when we see that something new is happening.
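As a small illustration of the article markup mentioned here, this sketch emits Article JSON-LD with explicit datePublished and dateModified values, so the date doesn't have to be inferred from the URL alone. All values are placeholders.

```python
# Sketch: emit schema.org Article JSON-LD with explicit dates.
import json

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "datePublished": "2019-01-11T09:00:00+01:00",  # ISO 8601 with timezone
    "dateModified": "2019-01-12T10:30:00+01:00",
}

print('<script type="application/ld+json">')
print(json.dumps(article, indent=2))
print("</script>")
```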
If domain A links to domain B with dofollow links, but domain A later adds a nofollow to the link, does Google discount the link domain B got in the first place? So when it comes to links, and when it comes to content in general, we essentially rank the pages based on the information we have at that time. It's not the case that we would say, well, it used to be better, but now it's bad; therefore, we'll still treat it as something better. Rather, we base our decision on the current situation. So if a link gets changed to a nofollow, then we treat it as a nofollow. If a link has its nofollow removed, we treat it as a link without a nofollow. If the content on a page changes, then we treat it as that new content. All of these things are essentially tied to the current state of the web page, of the website, not so much to the previous states. And usually, that's a good thing, because it means that you can improve things. So if you have a website that's not ranking first for all of the queries that you care about, you can improve things, and over time, as we see that the new content, the new website, deserves to rank better, then we'll rank it better. It's not that we say, oh, this website didn't rank well last year; therefore, it will never rank well in the future.

We have a site where, ever since March, April 2018, we've seen a big drop in traffic that hasn't recovered. So that's a really long time. I don't know if... Oh, I think this might be my question. Oh, that's your site. OK. I think so, yeah. It was two sites on the same day. OK, go for it.

OK, so yeah, back in April, April 3rd, both our sites were doing really well. And then it dropped through the floor, about 50%, 60% down on each site, two separate domains, just overnight. So yeah, we've had people in, we've had technical audits. As journalists, we pride ourselves on what we do anyway, but I mean, we've doubled down. We've been cleaning up; the list, I've got 25 technical and content-based points. We've improved site structure. We've really been through everything, really, and nothing. No improvement, really. They're flat, both of them. It's eerily flat; it's the same traffic every day since April the 2nd. So I don't really know where to turn now. I mean, we're talking about good pieces of content that ranked 2nd or 3rd and dropped to page 4. So it's not just a sort of general decline, and I don't feel like people are doing everything so significantly better than us that we'd go from position 3 to page 4. So yeah, we've lost staff. I don't really know where to turn, though.

If you want, you can drop your domain in the chat, and I can take a look at it afterwards. In general, if it's been so long since a change, then that sounds like our algorithms are pretty sure that we should be ranking the site like this. I mean, changes can happen in our algorithms, so it's not completely impossible that we would see changes like that. But usually that means we think we're ranking it in the right place. I can take a quick look after we pause the recording to see if there's anything specific that I can point out offhand. But usually it means that it's not a situation where we're on the edge and not sure how we should rank it; rather, we're pretty confident that we should rank it like this. If you've had other people look at it, then probably from a technical point of view, it's OK.

I know that Glenn Gabe sent you a note about this, because he's been through it, and it's sort of getting to head-scratching time. I don't know. I can take a quick look after we pause the recording and see if I find anything offhand. But usually it's more a sign that we're kind of OK with the ranking that we figured out there. And these things do change over time, so it's theoretically possible that something happened in April where we said, OK, we need to reevaluate how we rank some of these sites that we have indexed. The curious part of it is that the sites' rankings actually returned for exactly one week in July and then disappeared again. Yeah. I don't know. It's hard to say.
I don't know your sites at all, so it's really hard for me to say, oh, it's a terrible site, it deserves to not be ranked. It sounds like you're pretty sure that it's not. I'd like you to take a look. And yeah, I see a lot of these sorts of queries on the web, and there's always something sort of nefarious in the background. But as you say, you guys know what you're doing, and things have been running pretty well. So yeah, I'd like to see if you could take a look. That'd be amazing. I have a bit more time after we end the official hangout, and if you want to stick around, I can take a quick look. Yeah, I'd like that. Thank you. Sure.

Maybe we'll just switch to more questions from any of you in the meantime. It looks like there are also some in the chat here. Let's see. I have a question about sites using heaps of microsites to link back to the main site. A client's competitor has over 100 of these sites, and the sites' linking appears to help prop up the rankings of the main site. Is this business breaching any Google guidelines at all? Anything I can do?

So that just sounds like a PBN, which is kind of like the games that people play. If you want, you're welcome to send me the site in question, and I can have someone from the webspam team take a quick look. Sometimes it's OK to have multiple sites that link together. Sometimes the multiple sites are essentially just there as a network to prop up the ranking of another site. And sometimes it's useful to dig into those details. So if you want to drop me some URLs, then feel free to do that. In addition to the product question, I guess, with different domains. I don't know.

More questions from you all, maybe something live? That's probably easier. Nothing? Oh my gosh. OK. Then someone pointed at an hreflang question. Let me see if I can find that; maybe scrolling down some of the other questions that were submitted. And like I mentioned, if any of you want to jump in and ask a live question, go for it. Yeah, I was just looking for one here. OK. Arnie, go for it.

So after December, I started getting some weird messages that my robots.txt was blocking important URLs. And in fact, it apparently was, even though it was not supposed to do that. Those URLs were like our main URLs, which makes me think that probably somewhere in time someone dropped a disallow slash in there or something like that. But the thing is, I went through all of our logs, all of our internal systems, and I didn't see anything changing our robots.txt. We also have robots.txt monitoring that basically pings it, and if it's different than we expect, we get an alert. And we didn't get one. When I went to Google Search Console, the old version, it allows me to see the previous versions that Google has in cache, something like that, and they did not block any of those URLs. What I've been doing is basically just marking all of them as fixed. But I'm afraid that it might happen again at some point, and I don't know what could have happened here.

It's hard to know without looking at the site. Double-checking the old versions in Search Console is a great way to start. I would try to figure out if this was a one-time event or if it's something that is ongoing. In particular, if you submit those URLs with the Inspect URL tool, does it say that they're still blocked, or are they OK now? They're all OK now.
So if it was a one-time event, it could have been something as simple as the robots.txt file returning a 403 for a brief period of time, or returning a 500 server error for a brief period of time. Because both of those are situations where we say, well, we can't access the robots.txt file; therefore, we won't crawl anything from the website. And usually what happens is, the next time we check the robots.txt file, which is maybe a couple of hours later or the next day, we can crawl it normally again, we get all of the normal content, and we can crawl the website again normally. But during that period of time, where we're essentially blocked by the robots.txt file that we can't reach, we would assume that all of the URLs that we would otherwise try to crawl are blocked by robots.txt, and we might flag that in Search Console. So if this is a one-time thing that happened once, for a fairly short period of time, and it's just a bunch of URLs because we tried to crawl a bunch during that time, then maybe it was just something weird that happened on the server very briefly.

If you see this happening more than once, then I would definitely try to determine where it might be coming from. One place that I usually go to debug things like this is the server logs, and in particular, the robots.txt requests. So filter out all of the robots.txt requests that you got, in particular from Google, and look at things like the response size and the response code that you gave back, and try to see if there are differences in the sizes, or if the response code has varied over time, and use that as a way of narrowing down where things went wrong. Sometimes it's as simple as someone pushing a new version of the website where the robots.txt file was made inaccessible during that push, and we tried to crawl just during that short window; the next time around, it'll be better. That might also be a sign that maybe your process for pushing things to production could be improved, so that it doesn't block Google when it tries to access the robots.txt file.
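A sketch of the kind of robots.txt monitoring being discussed: fetch the file on a schedule and alert on a non-200 status or a content change. The URL and state file are placeholders; something like this would run from cron.

```python
# Sketch: periodic robots.txt check that alerts on status or content changes.
import hashlib
import pathlib
import requests

ROBOTS_URL = "https://example.com/robots.txt"
STATE = pathlib.Path("robots_last_hash.txt")

resp = requests.get(ROBOTS_URL, timeout=10)

if resp.status_code != 200:
    # A 5xx here is the dangerous case: Google may stop crawling the whole
    # site while it can't fetch the file.
    print(f"ALERT: robots.txt returned {resp.status_code}")
else:
    digest = hashlib.sha256(resp.content).hexdigest()
    previous = STATE.read_text().strip() if STATE.exists() else None
    if previous and previous != digest:
        print("ALERT: robots.txt content changed since the last check.")
    STATE.write_text(digest)
    print(f"OK: 200, {len(resp.content)} bytes, hash {digest[:12]}...")
```

Logging the response code and size on every check also gives you exactly the history John suggests pulling from the server logs.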
OK, now I think what might have happened is that we allowed Google to crawl a bunch more URLs at the beginning of December, and it had been hitting us really hard, like six times the usual. It was overloading our servers, and we started returning 429s to Google. And after some digging, I figured that you received that as a 503 or something. Yeah, yeah. So maybe it's just that the message was wrong: it's not that the robots.txt was blocking anything, but that you could not read it. No, we would differentiate between a 503 and us thinking that a website is blocked. Also, if the robots.txt file returns a 503, then that's a temporary thing for us; we'll try to get the next version of the robots.txt file first before we stop crawling. But if you return a 500, then that would be a sign to us that the site is blocked.

OK, but do you treat 429s as 500s? I don't know how we would treat 429s; I don't know for sure. Yeah, because the only thing that I found around this was a post about some answer from Gary on Twitter that said you might treat 429s the same way, meaning it will just slow down crawling. But what I saw here in my logs was that we were not returning any 500s to Google, only 429s. And at the same time, the number of server errors in our Google Search Console properties started increasing by the tens, even though we were not returning 500s. OK, I don't know. I'll double-check. That sounds like it might fit. Yeah, but I'll double-check on our side to see how we would treat 429s. All right, thank you.

OK, we're a bit over time, so I'll pause the recording here. If any of you want to stick around a little bit longer, you're welcome to do that. But otherwise, thank you all for joining. It's been great having you all here. It's nice to have so many people jump in, even on short notice, so thanks for all of that. And I wish you all a great weekend, and a good start to next week in the meantime. Bye, everyone. Thanks, everyone.