All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do are these Office Hours Hangouts, where folks can join in and bring their questions all around search and websites. And hopefully, we can help a little bit. As always, if any of you who are here live want to get started with a question, feel free to jump on in.

Yes, hi, John. So ever since the last update from August last year, I'm seeing a lot of people that try to remove their navigation and try to remove any sort of elements that can lead to other pages on their websites. I don't want to say everything, because there are a lot of spammers that may be listening. But I think they try to retain something within that page so it doesn't flow to other pages. And they actually index their content through their sitemaps. So it doesn't matter that one page doesn't link to the other pages, but it's ruining the user experience. And for example, there are pages with very good content on them. So it's not about content farms or low quality websites; they remove their navigation for a purpose that I don't know. So is Google aware of that technique that's been abused a lot lately?

I don't think that would be that useful for a website. So I think it's one of those things where maybe there are some misconceptions. But essentially, if there's no internal linking within the website, then it's really hard for us to crawl the website. And even with a sitemap file, it's hard for us to index that content properly. So I think there would only be downsides in search if you didn't have any internal navigation. So from that point of view, I don't think the user experience would be the first thing I would worry about as a site owner. Even setting the user experience aside, I think within search, a lot of things would just not work as well as they could otherwise with internal linking.

Well, there are several competitors doing that. And I actually saw some tests in that field. And I think that even some big names try to replicate the same technique without being too obvious. So they remove most of their navigation elements and leave, for example, only the category block. And they don't have any kind of search on their website. So it's really hard to navigate. And you have these 100 pages that you have to navigate one by one in order to reach the correct page. And it's a bad user experience. So I think that if Google factored in the usability of navigating through the website, it would be better, because there is really a trend there. I know that trend, and I know why they are doing that. So I think that Google should take a look at that thing for sure. It's not a myth. People are doing that now on a large scale.

If you can send me some examples, I'd love to take a look, because in general, if you remove the internal navigation, I only see downsides with the normal search things, because we can't pass any PageRank through the pages. And eventually, all of the pages get lost. So it's not something where I would see that as an advantage. So I don't know. I'm happy to take a look. Always curious to see what people are up to.

All right. Any other questions before we get started with the ones that were submitted? Yes, I have one question. Hi, Joseph. Hi.
I have one e-commerce website where, in Google Search Console, the referring domains earlier were like 450 for the main domain. And for a category page, like, suppose a website is selling t-shirts and the category page is "t-shirts for men", then the referring domains for that category were like 100-plus when I was checking links in the external links section in Google Search Console. But now there are no links for that one particular category. And other categories are showing links in external links as well as internal links. But for one category, now there is no data. So is it an error from Google's side? Like last week, we had the re-indexing issues in Google Search Console.

So I'm not aware of anything specific with regards to the links being problematic there. I mean, one thing to keep in mind is we don't always show all the links in Search Console. We try to show a representative sample. So depending on the website, that might be something that you're seeing. But otherwise, overall, I wouldn't really worry about this too much. I don't think it's that useful to focus just purely on whether or not Google has seen external links to specific pages within your website.

OK, thank you. No, I wouldn't worry about that too much. Cool. OK, let's maybe get started with some of the questions that were submitted. If anything from your side comes up in between, or if you have any comments or questions on these things, feel free to jump on in.

The first two are kind of similar. So is there a flaw in Google's algorithms? Maybe? I don't know. I think that's always possible. But the question goes on in that essentially they had a website that needed a brand change, and they did that twice. So now they have two old websites and one new one with all the same content. And then they removed and deleted all of the old URLs. But still, the old site is sometimes showing up in the search results. And the new site isn't showing that well. And the question is, could this be a problem with duplicate content? Or is this website doomed forever because they didn't know about duplicate content back then?

So definitely the website is not doomed forever. I think the first thing to know, the tricky part here, is it sounds like what you are doing is moving from one domain to another. And you are doing that not by moving, but rather by creating a new website every time. And from our point of view, what happens here is that these old websites have gained some value over time. But when you create a new website and you don't redirect from the old one to the new one, then that value is kind of isolated. And every time you start up with a new website, you're starting completely fresh. So that's something where instead of building up on the website that you've built over the years, where you've kind of built that brand up, you're essentially starting over fresh again. So it's not so much that there's duplicate content across these websites, but rather that you're not telling Google and you're not telling users that actually you have moved on to a new domain. So depending on when you made those changes, it might be a bit late to start doing that. Or if you don't have the old domain names anymore, then obviously that is also a bit tricky. But if you do still have the old domain names, then I would definitely set up a redirect from the old URLs to the new one so that any of the value that you've collected with the old domains over time is forwarded to the new ones.
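As a quick aside on that last point, here is a minimal sketch of how you might spot-check such a move: it fetches each old URL without following redirects and verifies that the server answers with a 301 pointing at the new host. The domains and paths are hypothetical placeholders, not sites from this discussion.

```python
# Minimal sketch: verify that URLs on an old domain 301-redirect to the
# new domain after a site move. Domains and paths are hypothetical.
import requests

OLD_URLS = [
    "https://old-example.com/",
    "https://old-example.com/products/blue-widget",
]
NEW_HOST = "new-example.com"

for url in OLD_URLS:
    # Don't follow the redirect; we want to inspect the first response.
    resp = requests.get(url, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "")
    if resp.status_code == 301 and NEW_HOST in location:
        print(f"OK   {url} -> {location}")
    else:
        print(f"FIX  {url}: status {resp.status_code}, Location: {location!r}")
```

Running something like this over the old site's sitemap is one way to catch URLs that were missed when the redirects were set up.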
In our Help Center, we have documentation in quite a lot of detail on how to do these kinds of site moves. So I would double check the documentation there and see which of those steps you can still do. And if you can't do anything, then obviously there's not much you can do. But the thing to keep in mind is even if you can't redirect the old URLs to the new one anymore, your website will still kind of build up its value over time. And it will still be a usable website that we'll be able to show in the search results normally. So again, work on creating those redirects if you can. And otherwise, just focus on your website and keep building that up. It's definitely not doomed forever and not something that is unfixable.

All right. The other question is kind of similar. We forked from a free wiki service that had a lot of ads and that was very slow to our own servers. We improved everything from speed to mobile responsiveness, and we removed the ads. And new pages show up fine in the search results. But older pages don't rank as well as they could. So the question is, what should we do here?

And I don't have the domain name, so it's hard for me to double check what exactly is happening here. But in general, it goes into a similar situation as the other one, where if you're moving from one set of URLs to another set of URLs, then try to make sure that you can set up redirects from the old pages to the new ones. I realize in a case like this, that's not always easy, especially if you're splitting off a small part of a site into its own site. But redirects do help us quite a bit there. The other thing, again similar to the other case, is that this is not something that means your website is doomed. Essentially, you can continue to work on your website and continue to make it better. And over time, that's something we will pick up and try to focus on. It's also not the case that the old site was a big site and therefore it's ranking better, but probably more just that the old site was already well known and has a lot of signals associated with it. And if you're not passing those signals on to your new site, then you have the old site with all of the signals that we know, and the new site, and we have to make a judgment call as to which one of these we should show first in the search results.

Hi, John. Hi. I have a question now. A few days ago, one of our clients asked us about his website. So he has an e-commerce website. They have a lot of products. So he wants to add a big category description on his category pages. Now, the problem is the way he wants to add it: he wants to add a little bit of the category description, then a read more button. When someone clicks on the read more button, more of the category description will come out. So is it search-friendly if we do this?

I think some amount of text on category pages is fine. But if you're only putting that text there for search engines, then that doesn't seem so optimal. So instead of hiding a lot of text on a page that you have to click on to expand, I would put the important text on the page directly, so it's visible right away. And if there are other things that you want to say, then that's something maybe you can fold away. But I would really focus on the user side rather than just kind of, like, I have a lot of text that I want to tell search engines so that they rank me better.

And one more question about this.
So the thing I've told you about, that read more button, they actually got that solution from an SEO company. That is where the whole thing started, whether we can do this. That SEO company also told them that their website's domain authority is not good. So I told them that this is not something which Google considers as a ranking signal. So is it right or wrong?

That's correct. You know that already. Yeah, so we don't use domain authority. Domain authority is a metric that's created by a third party company. I think it can make sense to look at those kinds of metrics. And sometimes it's useful to understand where a site is shown in search. But it's not something that we would use directly in our search results. OK, thank you.

Cool, let me see. A bunch of stuff was also submitted in the chat. Question about internal linking: if the link is already in the menu bar and the navigation, can we add that link in the body text? And if yes, then which anchor text will Google consider? You can definitely put that link in the body of your page as well. That's really common. It's not always the case that there's only one link to one URL within a website, or on a page. With regards to the anchor text, we try to pick the one that is the most relevant for that link, to understand the context of that link best. So that's something where I wouldn't blindly assume the first one on the page has to be the one that has the longest anchor text with all of the keywords in it. We try to understand the context of these pages and the way they're linked internally. And I think for the most part, that works out fairly well.

I work for a finance website. We have 1,800 indexed pages. Most of these pages don't get any traffic. Most of them are rootdomain.com slash category name page 123. Yes, John. It's my question. Hi. Yes. Hello. Yes. I work for a finance company. And we have more than 2,000 indexed pages. And most of them don't get any traffic. And I want to delete those pages because they don't create any traffic. And I wonder, is this harmful for my website's SEO or not?

It might be OK. So one thing I would look at here is, from our point of view, it's not just a matter of whether these pages get any traffic, but whether they're actually providing value. It can certainly be the case that you have pages which are so specific that maybe, I don't know, one person a month looks at them and says they're useful; then I would keep them. If they're not useful, if they're pages that are obsolete, that don't have any good content on them, then maybe improve them or maybe delete them.

Can I ask this question in another way? Let's say I have a category page, and the second, third, fourth, and fifth pages of this category are indexed. And I think the first page of the category should be indexed and the others don't need to be indexed. What do you think about this?

I think that can be a reasonable strategy, too. So the thing for us that's important is that we can find all of the product pages within your website. So if there are products that are only linked on page three or page four of those category listings and they're not linked anywhere else within the website, then that's kind of something we wouldn't find otherwise.

So I can noindex those pages and use follow, right? Follow but noindex. You can, but it's complicated. So I think someone else has a question about this as well.
But essentially, what happens on our side when we see that pages are noindexed, and especially if they're not linked well within your website, is that we will assume that these pages do not need to be processed for indexing. So any links on those pages will also get dropped. So probably not for the second page, like the first one that is linked, if you have that set to noindex. But kind of lower down the chain, we will see there's a noindex tag here; maybe we don't need to look at this page anymore, maybe we don't need to look at the links on this page either.

It can affect my traffic, too. Yeah. So what you can also do is cross-link the products. So if you have related products that are linked together, then we don't need to go through all of the category pages. We can go through the product and see the rest as well. Most of them are blog posts, actually. OK. A lot of times they also have pagination where you can go to the next one. So maybe that also works. Maybe internal links can be better. OK. Thank you, John. Sure. Cool.
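Since links on long-noindexed pages can eventually be dropped, one practical check is simply to know which of your paginated pages carry a noindex robots meta tag. Here is a minimal sketch along those lines; the URL list is a hypothetical placeholder, and a real audit would parse the HTML more robustly than a regular expression.

```python
# Minimal sketch: flag pages carrying a noindex robots meta tag, since
# links on long-noindexed pages may eventually be dropped from the
# link graph. URLs are hypothetical placeholders.
import re
import requests

PAGES = [
    "https://example.com/category/shirts?page=1",
    "https://example.com/category/shirts?page=2",
]

# Assumes the common attribute order name="robots" content="..."
META_ROBOTS = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
    re.IGNORECASE,
)

for url in PAGES:
    html = requests.get(url, timeout=10).text
    match = META_ROBOTS.search(html)
    robots = match.group(1).lower() if match else ""
    if "noindex" in robots:
        print(f"noindex: {url} (robots: {robots}) - links here may be dropped over time")
    else:
        print(f"indexable: {url}")
```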
All right. Let me jump to the next question here. It was great to meet at Brighton. Yeah, that was good. So I remember your website. So you asked about two websites that have the same checkout flow, essentially, like the same basket, you called it. And the question is, would there be any negative effect from having the same checkout flow for two different websites? And you have some more information here saying that the websites share a common header and footer so that they look like they're from the same company. Would that still be OK?

From my point of view, that would be perfectly fine. So that's not something that needs to be completely separated. From our side, the one thing that we kind of watch out for is when it starts getting into situations where you have a lot of different content or a lot of different websites that are using essentially the same backend, the same setup. Because what happens then is it starts looking a lot like doorway pages. So if you have two websites and they're using the same checkout flow, that's perfectly fine. If you have 100 websites and they're all listing the same products and going through the same checkout flow, then for us, that looks a lot like a set of doorway sites. So that's kind of the main difference there. But again, if you have two, then that's perfectly fine. That's not something I would really worry about here.

Two of my websites were badly affected by the search bug from the beginning of April. Search Console shows that as many as 3,000 of our pages were de-indexed. 200 keywords fell and traffic took a hit. What could we do to recover from this?

So I think there might be multiple things happening here. On the one hand, the de-indexing issue that we had should be long fixed in the meantime. I think we fixed that on Monday or Tuesday, or kind of had the last parts of that fixed. The question looks like it was submitted yesterday. So if you're still seeing pages that are not indexed, then that would be from something else. In particular, that sounds a lot like a technical issue, if you're seeing a lot of pages just dropping from the search results. And what might be confusing here is that maybe the timing kind of overlaps with the issue on our side.

So what I'd recommend doing here is maybe going to the Webmaster Help Forum and posting your URL and some of the pages where you're seeing problems, so that others can take a look to make sure that there is nothing really obvious that maybe you're missing. Oftentimes, when you're embedded in your website, you miss some of these obvious signals that other people might see. So try to get a second pair of eyes to look at these things and then kind of work up from where you're seeing the problems to ways that you can fix that. So things like using sitemap files, perhaps, to let us know about these pages, making sure that the internal linking is working well, making sure that all of these pages are actually indexable, that you don't have a wrong setting somewhere, or a server that is perhaps blocking something on your side, all of those things.

I am inserting schema using Google Tag Manager, which triggers on a specific data layer with a variable. This data layer triggers between the DOM and the window loaded. How do I know if the crawler can parse this dynamically inserted data? I can't see the schema in Search Console at all.

So if you can't see it at all in Search Console, then I suspect that's not working the way that you'd like it to work. In particular, the structured data reports in Search Console should show the structured data that you're adding like this. In general, when it comes to Google Tag Manager, I tend not to recommend it, because it's a fairly complicated setup that requires rendering for us to actually pick up the content. But I do know that people use it and that people seem to be able to use it in a reasonable way. So from that point of view, I think there are ways that you can make this work. I don't know if the specific setup that you currently have is set up in a way that would work. So what I would recommend doing is using the Inspect URL tool in Search Console. There you can do a live test for one of these URLs. And in the live test, you can look at the HTML source as well as the JavaScript console to make sure, on the one hand, that within the HTML source the schema markup that you're adding is visible. If it's not visible there, then that means we're not able to process whatever you're using to insert the structured data there. And the JavaScript console, the console log that you see in the Inspect URL tool, can oftentimes give you a little bit of information on where things are getting stuck. So you might see things like, I don't know, ES6 syntax that's being flagged, because for the most part, Googlebot is using an older version of Chrome, and it wouldn't be able to process ES6. There might be other aspects that are being flagged there, such as maybe a robotted JavaScript file or something else that is blocked from being processed. All of those things are more visible in the Inspect URL tool. But in general, you can use Tag Manager. I tend not to recommend it mostly because you add so much extra complexity there that makes it hard to debug cases where you're trying to do something specific and you don't really see that exact result in the search results.
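As a quick complement to that Inspect URL workflow, here is a minimal sketch that pulls a page's raw HTML and validates any JSON-LD blocks found in it. If Tag Manager is supposed to inject the markup, an empty result from the raw HTML simply means the markup only exists after rendering, which is exactly the situation the live test helps debug. The URL is a hypothetical placeholder.

```python
# Minimal sketch: extract and validate JSON-LD blocks from raw HTML.
import json
import re
import requests

URL = "https://example.com/some-page"

html = requests.get(URL, timeout=10).text
blocks = re.findall(
    r'<script[^>]+type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    html,
    re.DOTALL | re.IGNORECASE,
)

if not blocks:
    print("No JSON-LD in the raw HTML; it may only appear after rendering.")
for i, block in enumerate(blocks, 1):
    try:
        data = json.loads(block)
    except json.JSONDecodeError as err:
        print(f"Block {i}: invalid JSON ({err})")
        continue
    if isinstance(data, dict):
        print(f"Block {i}: valid JSON-LD, @type = {data.get('@type')}")
    else:
        print(f"Block {i}: valid JSON-LD (array of {len(data)} items)")
```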
At Brighton SEO, it was suggested by one of the speakers that, over time, Google can stop crawling a noindexed piece of content altogether and that this can have an impact on the internal links. This is, I think, that question that you had before. Will Google remember the relationship between these pages and the content on them if we can no longer assume that Google will still register followed links from noindexed pages? What about the rest of our internal linking and the authority and author pages, all of these things?

So I think, like I mentioned before, if we see that a page is noindexed for a longer period of time and we don't see a lot of internal links going to that page, then, over time, we will assume that this page does not need to be processed. And if it doesn't need to be processed, and it's a noindexed page, then we will drop it from our index and also drop the links on that page. So that's not really something particularly new. That's something that's been the case for a fairly long time. Sometimes you see these pages shown in Search Console as soft 404 pages, which is kind of how we categorize them internally. But essentially, that's kind of by design from our point of view, because if a page is not meant to be indexed, then at some point, we won't index it. And then the links there also won't be processed. With regards to internal linking, I would definitely make sure that the internal linking that you care about is on indexable pages, so that we can go to those pages, we can keep those pages in our index, and we can follow the links from there to see the content that you do care about. So if there's something important within your website, make sure that it's linked appropriately internally from pages that we can keep in our index.

I have a site that's been online since 1997. Can you check with your engineers to see if it's flagged in any way due to maybe low quality or duplicate content? The site just had a complete overhaul.

So I did take a quick look at the site. Let me double-check. And from our point of view, we do see some issues with regards to quality there. It's not that there's any manual flagging in place, because manual flagging you would see directly in Search Console. But we are a bit worried about the overall quality of the website. So that's something where I would try to take a step back and maybe get some input from other users and see what you can do to kind of significantly take that up to a new level. So what I usually recommend doing is creating some kind of a small user study group, where you invite a bunch of users, ideally some who are not associated with your website, to your office, or where you can kind of join them in a hangout or something similar, and give them some tasks to complete within your website and maybe across other websites in a similar niche. And then go through a bunch of questions with them to see: where are they getting stuck? What is their overall feeling when they come to this website? We did a blog post a few years back on things that you should look at with regards to high quality content on a website, some questions that you can ask. I would go through a bunch of those questions with them to see which areas within your website are maybe areas that you really like, but which might not resonate as well with users overall. So that's kind of the direction I would take there. Obviously, the technical things that you mentioned, like generating 404s and 410s, that's fantastic. Making sure that all of the internal linking works well, that your site has a good bounce rate, I think that can be useful for measuring things on your site, to see: are there any obvious areas that you're missing?
We don't use Analytics for search, so it's not that that would have a direct effect, but it's useful as a webmaster to kind of understand what is happening. But I would really try to take a step back and see what you can do to maybe completely modernize things a little bit, so that you're sure that you're staying with the times and that the current set of users are also still happy with your website.

What are your thoughts on TF-IDF keywords? Does Google use a similar mechanism? Should we make use of this to make our content better?

So TF-IDF is essentially a metric that is used in information retrieval. So if you're building a search engine, with regards to trying to understand which are the relevant words on a page, we use a ton of different techniques from information retrieval. And there are tons of these metrics that have come out over the years. This is an interesting one. My general recommendation here is not to focus on these kinds of artificial metrics, because it's something where, on the one hand, you can't reproduce this metric directly, because it's based on the overall index of all of the content on the web. So it's not that you can kind of, like, say, well, this is what I need to do, because you don't really have that metric overall. The other thing is this is a fairly old metric, and things have evolved quite a bit over the years. And there are lots of other metrics as well. So just blindly focusing on one kind of theoretical metric and trying to squeeze those words into your pages, I don't think that's a useful thing. I think that's very short-sighted thinking, because you're focusing just purely on a search engine where you think that these words have a stronger effect. Instead, I would strongly recommend focusing on your website and its users and making sure that what you're providing is something that Google will, in the long term, still recognize and continue to use as something valuable. So don't just focus on artificially adding keywords. Make sure that you're doing something where all of the new algorithms will continue to look at your pages and say, well, this is really awesome stuff. We should show it more visibly in the search results.
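For reference, the metric John is describing is simple to state: term frequency multiplied by inverse document frequency, so a word scores highly in a document when it is frequent there but rare across the corpus. Below is a toy illustration of that textbook formulation over a tiny made-up corpus; it is not a reconstruction of anything Google uses.

```python
# Toy illustration of classic TF-IDF (term frequency * inverse document
# frequency) over a tiny corpus. Textbook information retrieval, nothing
# Google-specific.
import math
from collections import Counter

docs = [
    "blue widgets for sale",
    "red widgets and blue gadgets",
    "gardening tips for spring",
]
tokenized = [doc.split() for doc in docs]
n_docs = len(tokenized)

def tf_idf(term, doc_tokens):
    tf = Counter(doc_tokens)[term] / len(doc_tokens)
    doc_freq = sum(1 for d in tokenized if term in d)
    idf = math.log(n_docs / (1 + doc_freq)) + 1  # smoothed against division by zero
    return tf * idf

for term in ("widgets", "gardening"):
    print(term, [round(tf_idf(term, d), 3) for d in tokenized])
```

Note that the score depends on the whole corpus through the document frequency, which is exactly why the "real" value is not reproducible without Google's index.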
My website pages are not indexing, and it's been two months now. I have a WordPress website, and my developer created a single template, and they call it for every service page. Is this template stopping my website from getting indexed?

So I took a quick look at this website, and on the one hand, there's not a lot of content here, so that will be a problem, especially even once we start indexing things. On the other hand, what I noticed is that these pages were noindexed until just a few days ago. With the noindex tag, which I assume was just a setting in WordPress that you had activated, we will not index these pages even though we can crawl them. So it looks like you removed the noindex, and I think you're on the right track. I think that should help to get these pages indexed fairly quickly now. However, you really, really need to make sure that you have some unique content on these pages, something that really matches what you're doing and the services that you're providing, kind of highlighting the value of your website across the web, and making sure that you're showing something there that is unique and compelling, and not just the same as other sites have. That's obviously easier said than done, but that's kind of the direction I would recommend there, especially since your website is targeting kind of webmaster, web design, and SEO services. I think you really need to make sure that you have that covered really well within your website, so that you can kind of shine for any potential user who's coming to your business and wants to see what you actually do.

Using a keyword finder tool, my website remains behind the competitor in the overall link profile score for a keyword, even though the page authority, domain authority, citation flow, and external backlinks are better. The only thing we're marked down on is trust flow. Search Console says we have 15 external links to my website. Although some were established by previous web developers, they all seem to be relevant business directories. I watched the Google disavow backlinks video, which discouraged the process, like, I guess, the disavow links process, unless you received a warning from Google and know what you're doing. What would you advise as the best way forward to improve our trust flow to overcome this issue?

So I think, first of all, you're looking at a lot of things. So I think you're kind of on the right track in the sense that you're focusing on your website and looking at what could be playing a role there with regards to your visibility in search. And I think that's always an important first step, to kind of assess the current state. On the other hand, you're looking at a lot of metrics that we don't use within Google. So things like page authority, domain authority, citation flow, trust flow, these are all metrics that are created by third party companies. They're not things that we directly use. They can be useful for analyzing your website and for trying to find places where maybe your website could improve in comparison to competitors. But again, these are not exact metrics that we are using. So if you see that your trust flow is not as good as some competitor's, then I don't think it makes sense to kind of game whatever metric trust flow is using, because you're improving the trust flow metric in a third party tool, and you're not really working on improving your Google search relevance and visibility in the search results. It's not to say that these external tools are looking at the wrong things, but blindly looking at external metrics can kind of lead you in directions where you're actually doing things just for those metrics rather than for your website overall. So that's one thing to keep in mind here. The other thing, with regards to the links: it sounds like you have a start there, in that some links are pointing to your website. And from our point of view, that's essentially what we're looking for. We need to be able to find your website, and oftentimes we find a website through external links. And at that point, it's not really a situation where you need to go off and create more and more of these external links, especially when you're talking about business directories. Just going off and submitting your website to a bunch of business directories generally isn't going to have much of a positive effect there.
So instead, what I'd recommend doing is focusing more on your website, focusing more on your users, focusing on creating some content that users would find fascinating, something that other people might want to link to or recommend, and really making sure that you're building this value on your website rather than artificially trying to tweak individual metrics. The importance, or kind of the value, of building things on your website is that these are things that you can keep for a longer period of time, even if something within these external metrics or within Google's algorithms changes. If you're building something on your website that people remember and want to recommend to other people, that's value that you can kind of keep for the longer term. So that's the direction I would head there, and I would see this more as a long-term effort rather than "I just need to tweak five things and then everything will be OK."

Two years ago, I asked your thoughts about sites nofollowing all external links, and you basically said these sites wouldn't have an issue ranking-wise because of it. This practice seems to be increasingly common. You mentioned that it makes it harder for you to find new content. Do you think your thoughts here might ever change? For example, it's unnatural to never link to another site with a followed link. Or will Google just find new ways of relying less on the link graph?

Very philosophical question, I guess. On the one hand, there are multiple things that kind of play into this, and I think you're probably looking at some specific cases with regards to maybe some bigger sites that have started using nofollow more and more. From my point of view, I think it's still the case that overall, if you're not linking to other sites, that's essentially up to you. We do look at a website overall to see how it's embedded within the web, and we try to figure out where we should show it based on how you're positioning your website in the web. But if you choose not to link to other people's websites, that's kind of up to you. It might be that other people choose not to link to your website, which is kind of an effect there that you might want to watch out for. With regards to finding new content, I don't know if that's really at a stage where I would say this is critical, or at least not critical just yet. Think especially of sites that are trying, for example, to get links from bigger news websites: these tend to be bigger websites anyway, where we already have a lot of information about the content. We can crawl their websites internally. We can find a lot of their content that way, too. So I'm not too worried about not being able to crawl the web to find new content. Obviously, sometimes it makes it a bit harder. With regards to will this ever change: I think, at least on the web, nothing is ever set in stone forever. The web is still so new that these things can change over time. So that's something where I would never say that this is always going to be the case and it will never be any different, at least for pretty much everything. But I don't know. Also, with regards to relying on the link graph, we do look at a lot of different signals. So it's not just links that make it so that we can pick up content and show it in search. It's not that rare that I run across a question in the help forums or somewhere else where people are saying, oh, my site is not ranking anymore for my important keywords.
And I look at it with the team, and links are not the problem there. It's not a matter of PageRank or anything with regards to links being the limiting factor, but rather everything else around the website or on the website being more of a problem. Sometimes it's as simple as, well, you don't explicitly mention what you want to rank for on your pages. So sometimes it's that basic. But these are things that we do try to look at overall. And with regards to whether Google relies more or less on links, I think that's really hard to say, and it really depends from case to case. It's not the case that we have a fixed weighting and say this factor plays 10% of the role and this factor plays 10% of the role, and those 10% are going to be the same across all queries and intents and across all websites. These things vary quite a bit. An obvious use case is when something happens that's in the news, such as the recent fire at Notre Dame. If you had searched for Notre Dame maybe last week, you would probably have found old articles about Notre Dame, maybe things that have acquired a lot of links over time. If you search for it in the last couple of days, you'll probably find a lot of articles that don't have a lot of links yet, because they're so fresh and so new, but they're extremely relevant for this particular query. So even there you can see that the weights of the individual factors that we have can vary quite a bit. That's also why, sometimes, when people ask me all of these detailed questions about links, I'll say, well, maybe you're focusing too much on links. I mean, we do use links in our systems, but links are definitely not the only thing, and they're definitely not the only thing that you really need to be focusing on, even if you're in a fairly competitive area.

Let's see, another person from Brighton, that's great. It was a fun, fun conference. Here's my question: is there any way to remove our competitor's misleading content, which appears at the top of the search results? Misleading in the sense that they don't own the clinic in that specific location any longer, as we acquired it, but they still own the page where the information about that clinic is, which ranks really well, despite anything that we're doing.

I don't know. I'm curious to look at examples. So if you want to send me some examples of this, I'd love to take a look and look at that with the team here. But in general, our algorithms try not to focus too much on whether or not something is misleading, because that can be very subjective. In particular, if there's a change of ownership situation, then sometimes it's really hard to say which of these should be shown, because the old owner was perhaps really the old owner, and their content might still be relevant. And the new owner might be the new owner. So both of these things are factually correct and factually from people who are kind of running, or who were running, that business. And it's sometimes a bit tricky to pick which one of these we should show. And that can change over time as we see things kind of settle down again. But it can also be the case that we kind of say, well, maybe this old content is so relevant that we should still show it, even though things have since changed in the meantime.
But again, it'd be really interesting to see the specific case that you're looking at here, so that I can maybe take a look at that with the team, to see: is this something that we're totally, obviously getting wrong, or is this more a matter of interpretation, like, well, both of these could be reasonable results, and we're just not picking the one that you prefer to have shown.

All right. Wow, a bunch of questions in the chat again. We're trying to write software for our websites that immediately relays updates of the website to Google. This is especially important since it contains jobs, for example. Even though the Indexing API is not officially available in the Netherlands, can we still use the Indexing API?

So in particular, for Google Jobs content, you can use the Indexing API. The Indexing API works in the sense that you submit individual URLs from your website that have changed or that are new, and based on that, we'll try to kind of pick those up as quickly as possible. My understanding is it's limited to jobs content and some sort of live video content. And so if you have jobs content on these pages, that sounds like the right match. If Google, or is it Google Jobs, or Google for Jobs, hasn't launched in the Netherlands yet, you wouldn't see that within the job search results, obviously. I don't know what the plans are for expanding within Europe, or in general with Google for Jobs. I know they're trying to do that in a thoughtful way. So my recommendation there is you can use the Indexing API for something like this. I can't promise that Google for Jobs will come to the Netherlands and be able to take that into account. But it's worth a try to be prepared for when it does come, or if it does come. The other thing, obviously, is sitemaps: using sitemaps, using internal linking, and kind of submitting your content that way so that we can pick it up there too. All right, thank you.
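For context, notifying the Indexing API about a changed URL is a single authenticated POST. Here is a minimal sketch; it assumes you have already set up a service account with access to the site in Search Console, and the key file path and job URL are hypothetical placeholders.

```python
# Minimal sketch: notify the Indexing API that a job posting URL was
# updated. Assumes a service account with access to the property; the
# key file and URL are placeholders.
from google.oauth2 import service_account
from google.auth.transport.requests import AuthorizedSession

SCOPES = ["https://www.googleapis.com/auth/indexing"]
ENDPOINT = "https://indexing.googleapis.com/v3/urlNotifications:publish"

credentials = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
session = AuthorizedSession(credentials)

response = session.post(
    ENDPOINT,
    json={
        "url": "https://example.com/jobs/backend-developer",
        "type": "URL_UPDATED",  # or URL_DELETED when a posting is removed
    },
)
print(response.status_code, response.json())
```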
All right, a question about crawl budget. We have some deleted content from three to four years ago. And when I check in Kibana, I see Googlebot still crawls those pages with their resources, like images, JavaScript, and CSS files, and gets a 404 status code from each one of them. This is hurting my crawl budget. And I wonder how I can prevent Googlebot from crawling these resource URLs. Yes, it's my question.

All right, so the good news is they shouldn't be causing problems for your crawl budget. Because what happens here is, when we see URLs that have been returning 404 for a longer period of time, we'll crawl them less frequently. And when we have a limit with regards to the number of URLs that we can crawl from your website, we'll focus on URLs that we know, or that we suspect, are good. So if we have leftover room and we can still crawl more from your website, we'll go off and check all of these 404s. But they're not kind of pushing the good content that you want out of the way.

OK, but may I ask in another way? How can I say this? When I check in Kibana, I see Googlebot hits these links repeatedly. And I think it may be important. And I think I can use a 410 status code. It may work.

I don't think that would change anything. So the 410 versus 404 is more a matter of whether this content is still indexed. If it's still indexed, then with a 410, we will remove it from the index a little bit faster. But once it's removed from the index, the 404 and the 410 are the same thing for us.

So Googlebot will still continue to crawl these URLs for, like, four more years? Because we deleted most of them three years ago, and still it comes. Yes, we have a long memory. And it's not that these will be pushing the good URLs out of the way. It's more that, oh, these pages used to have some content; we will double check them maybe once every half year or so to see if anything new has come back. And it has no importance, as I can see. OK. Thank you.
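If you do want to serve 410 for the still-indexed pages, the change is small at the server level. Here is a minimal sketch using Flask; the route and the list of retired paths are hypothetical, and in practice the list would likely come from a database. As John notes, this mainly speeds up removal from the index; afterwards, 404 and 410 are treated the same.

```python
# Minimal sketch: serve 410 Gone (instead of 404) for content that was
# deliberately and permanently removed. Paths are hypothetical.
from flask import Flask, abort

app = Flask(__name__)

# URLs of content deleted years ago.
GONE_PATHS = {
    "/blog/old-post-from-2016",
    "/blog/another-retired-post",
}

@app.route("/<path:page>")
def serve(page):
    if f"/{page}" in GONE_PATHS:
        abort(410)  # Gone: removed on purpose, not coming back
    return f"Content for /{page}"

if __name__ == "__main__":
    app.run()
```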
Do any webmasters have an issue of links dropping in their Google Search Console for a particular page, from 100-plus domains to zero domains, in both external links and internal links? I don't know of any issues there. But it sounds like you're the second person to mention that with regards to Search Console. So I'll double check that with the team afterwards. Good thing. OK, go for it.

Yeah, like, I have an e-commerce website, and today is the 18th of April, and Google is showing the cache date as, like, the 28th of March and 30th of March. And on the other hand, when I am updating meta titles and descriptions on the pages, they are showing in the SERPs, the updated meta titles and updated content. But when I check the cache date of the page, it is showing the 28th of March or 30th of March. So there is a gap of 18 days. So may I know the reason?

That can happen. So the cached page is not exactly what we use for indexing. So sometimes the cache date is a bit out of sync. Sometimes it's a little bit delayed. I could imagine that maybe the cache date currently is a little bit delayed because of the indexing issues we had before. But the cache date doesn't mean that that's the last time we looked at those pages. So like you said, you're seeing the new changes in the search results, so I think you should be OK with that. OK, thank you.

All right, let's see what else we have that was submitted. Or if any of you have any more questions, feel free to jump in. Yes, can I jump in? Go for it. So I'm optimizing for speed for several clients. And we always reach for the green scores on the Lighthouse tests. And what we noticed recently is that, because we use third party tools like GTmetrix, Pingdom, and webpagetest.org, they were all reporting better speeds after the optimization. But when we add Cloudflare to the equation, we have better speeds in these third party tools, but lower speeds and lower scores in the web.dev portal. So I assume that if you use speed and these metrics, in future or currently, for rankings, you will use your own services, not third party tools. So that Cloudflare thing, despite having a real impact in making things better, is actually making things look worse in the Google services.

I don't know. I'd have to take a look at that. And I've seen other people reporting that as well. It's maybe related to the way Cloudflare throttles Googlebot or the Google services when they test things out. But I'm not sure.

But in most situations, it makes Google understand the site as a slower one after adding Cloudflare into the equation. So I think the tricky part is sometimes we flag things, especially in PageSpeed Insights, that are technical things that you could be doing but which don't necessarily map one-to-one to an explicit speed change. So especially if you're looking at the score, that's something where you could game the score and try to get it to 100, but it could be done in a way that doesn't necessarily reflect a visible speed change for users as well. So that's one thing to keep in mind. The other thing is, when it comes to speed, we look at a variety of metrics. It's not just the PageSpeed Insights score. We try to look at some lab data that we can recreate, as well as some real world data, where we see how users are experiencing the site. So if users are seeing the site as working really well, and if most of our lab tests are working out really well, then I think you're on the right track there. So I wouldn't artificially go away from a content delivery network just because you're seeing one kind of score within PageSpeed Insights that's affected negatively. Especially if you're seeing visible changes with regards to speed using other tools, then I would go for it.

Well, I'm looking at the time to interactive, et cetera. So it's not only the abstract score out of 100, but also the actual speed measured by the web.dev portal, which is showing slower speeds there. Now, I don't know why time to interactive would be slower from that point of view. That might also just be, I don't know, a measurement issue that changes over time. OK. Cool. All right.

Let's see. There was a Cloudflare question. Self-referencing canonical tags: is that a best practice or not? Yes, that's perfectly fine. Especially if you have static HTML pages and you can't control redirects, maybe from parameter URLs, then self-referencing canonical tags really help us to pick the right canonical URLs there.
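On that parameter-URL point, here is a minimal sketch of how you might audit self-referencing canonicals: for each fetched URL, it checks that the declared rel=canonical matches the URL itself with query parameters stripped. The URLs are hypothetical placeholders, and a real audit would use an HTML parser rather than a regular expression.

```python
# Minimal sketch: check that each page declares a self-referencing
# canonical, i.e. its rel=canonical matches the page's own URL with
# query parameters removed. URLs are hypothetical.
import re
from urllib.parse import urlsplit
import requests

PAGES = [
    "https://example.com/widgets",
    "https://example.com/widgets?utm_source=newsletter",
]

CANONICAL = re.compile(
    r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

for url in PAGES:
    html = requests.get(url, timeout=10).text
    match = CANONICAL.search(html)
    if not match:
        print(f"missing canonical: {url}")
        continue
    canonical = match.group(1)
    # A parameter URL should canonicalize to the clean version of itself.
    clean = urlsplit(url)._replace(query="").geturl()
    status = "self-referencing" if canonical == clean else f"points elsewhere: {canonical}"
    print(f"{url} -> {status}")
```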
Yeah, I think that's pretty much it. A bunch of smaller questions as well. In general, if there is still something on your mind that you really need an answer to, I'd recommend maybe jumping into the Webmaster Help Forum. Lots of really knowledgeable and helpful folks are active there who can give you some tips on a lot of the more common questions that you might run across. So if there's anything that I missed that you really want an answer on, feel free to try that, or maybe drop it into one of the next Office Hours Hangouts.

Yeah, I think in the chat, we're mostly finished as well, except the question on backlinks. Is Google doing anything to deal with irrelevant backlinks? For example, a page on site A has a backlink from Wikipedia or a good authority website, and now that page doesn't exist, or is a different page altogether, but site A still gets rankings because of such signals.

So I think, first of all, as far as I know, most external links from Wikipedia are also nofollow, so those would not have an effect. But in general, if a link points to a page that is now a 404, from our point of view, that would be something that we would drop from our link graph. We wouldn't apply it to the rest of the website, primarily because if someone was recommending a particular page within a website, and that page no longer exists, then they're not really recommending anything on that website anymore. So as a website owner, if you're changing URLs, make sure you're using redirects, so that you tell us that this page has moved. It's not disappeared; it's just moved. Then we can forward those links. But if the page no longer exists, then the page no longer exists. And the other thing that I would keep in mind here is that we use a lot of different signals for crawling, indexing, and ranking. So just because you see some links going to one website doesn't necessarily mean that those links are the reason why we would rank that website. Kind of like I mentioned before for some of the other questions, there are lots of things that play into this, and links play a small role there in recognizing this content. But it's not the case that this is the only thing that we would worry about when it comes to ranking. So from that point of view, I wouldn't focus so much on these links that your competitor still has, but rather try to find a way that you can improve your website, on the one hand to significantly kind of show search engines and all users how fantastic it is, and on the other hand, to maybe also create some content that people really, really love, that they want to recommend to other people. So instead of saying "this competitor still has this link and I don't think that's OK", look at your website and say, how can I build out my website so that people recommend my website more, so that I don't have to kind of try to remove links from other people's websites? Obviously, that's easier said than done. So I hope you find time to kind of dig into that a little bit.

OK, I think time-wise, we're pretty much at the end. I'll set up the next batch of Hangouts probably later today. Not quite sure, with regards to timing, how it will end up. I'll try to make sure that I also have a German one in there, because we've kind of missed out on those, but definitely get some of those in as well. All right, thanks all for joining. And if you're celebrating Easter, have a good Easter. Otherwise, see you next time. See you. Bye, everyone.