All right, welcome, everyone, to another Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do are these office-hours Hangouts, where people can join in and ask any question related to their website and related to web search, I guess. A bunch of things were already submitted. But if any of you want to get started with the first question, jump on in now. John? Sure, hi. Hi, you may remember the last German Hangout, where I mentioned the content change on my small website, where I also added Article markup on about 14 pages. But neither the new keywords nor the structured data have been noticed by Google until today, neither in search nor in Search Console. The pages were indexed more than 10 or 11 weeks ago. And I gave you two URLs to check, the home page and an article page. Did you check them? Could you find anything? I don't remember. I need to double-check. I can double-check later on if you want to stick around afterwards, when we have a little bit more time. Yes. OK, I will stay. OK, thank you. Sure. All right, any other questions before we jump into the submitted ones? So maybe you can begin with mine. It's there in the comments. OK, let's see. I think yours is the second one on the list, so I'll just start with the first one so that I don't lose track of where I was. OK, so the first one is: a multilingual, international website is targeting a specific country with URL patterns for the home page like ksa.mywebsite.com/ar, which has the hreflang value ar-sa. So I guess that would be Arabic for Arabia? I guess. I don't know. I don't know my country codes. OK, and another one, /en, which has the hreflang value en-sa. Knowing that Arabic is the default language for my website, does it help my website to rank higher if I implement URL patterns without the /ar prefix in the URL? So just ksa.mywebsite.com without the /ar. No, it doesn't change anything. 
For hreflang, we look at the specific URLs, and we essentially rank them as they are. So that's something where I'd try to keep things as clear as possible in your URLs. So if you have different country or language versions, make it so that people can recognize that right away. And in general, I don't see any problem with just keeping it with /ar and /en for the different language versions. The country version, I think you already have set up as a subdomain, which is good, because for geo-targeting at the country level, we try to look at clear patterns: either domain-level, subdomain-level, or clear subdirectory-level folders there. So you're already using a subdomain, at least judging from the example URL, and that would be perfect. So from that point of view, I don't think you need to remove the /ar. You can if you want, but it wouldn't change anything for search. Should we provide a list of domains that have been removed in our reconsideration request for the remaining ones? We received a manual penalty for thin content for many of our domains, and we killed a big portion of them. We improved quality on our high-value sites, and we believe they meet Google's guidelines. Yes, I think that generally makes sense. So in particular, if you have a lot of websites that are under a manual action and you delete a lot of those, and the process of deletion is kind of your way of cleaning things up, then that's something I would mention. I don't think you need to list them all out individually, but you can say something like, oh, we had, I don't know, 100 domains with a manual action for thin content, and we removed 90 of them. Kind of the rough order of magnitude, so that the people who are looking at this to see if it was resolved recognize, oh, OK, you took significant steps to clean that up. And does it matter how we kill the sites: noindex, server error, DNS error, or removal request? 
I would just do it in a way that is as clear as possible. So a server error or a DNS error is something where, well, nobody knows if it's a temporary thing or not. But if you really take down the site, then when people check the URL, they clearly notice, oh, this website is gone. That could be a 404 page, something like that. Just make it as clear as possible that this is not a temporary thing, where you just switched it off to see what happens, but actually something that you decided to do. OK, thanks. OK, for the past three days, we're facing big latency in the Indexing API and a lot of 503 errors with "the service is currently unavailable." Is the service down? I don't know. So I just saw this question earlier, before the hangout, and I'm not sure what the current status is. I'm not aware of any bigger issues there. So sometimes what happens is that things just get stuck for a little bit, but I'll double-check with the team. And if there is an issue there, then we'll obviously try to get it resolved as quickly as possible. There are lots of schema.org types of structured data, but only a few of them are listed in the Google Developers documentation. Should we spend time implementing something like ImageObject? So I think ImageObject is actually one of those that is mentioned in the documentation. In particular, we use images as kind of a sub-element of different other structured data elements. So I think it's, in particular, things like recipes, where you have images that you specify for that. My understanding is we use ImageObject. I need to double-check the documentation, but that's kind of my understanding there. But going back to the more general question: should we use types of structured data that are not in the developer documentation? So in practice, what happens here is we would not use them to do anything visible with your pages. So if you're using a type of structured data that we don't support, then you would not see any visible effect there. 
If it's, I don't know, maybe you have a car on your page, and there's a structured data type for a car, then just using that won't automatically transform your search result into something that looks like a car. Because if we don't have that type of rich result, then we wouldn't be looking at that markup. And if we did have that type of rich result, we would have it documented appropriately in the developer's guide. That said, all types of structured data help us to better understand the entities on a page. So in particular, going back to the car example, if you have something that uses a lot of words that could be interpreted as maybe an animal or maybe a type of car, then that's something where, if you tell us this is actually about a car, that would give us a little bit more information and would allow us to try to rank your pages more appropriately. So it's not that you would rank higher, but rather we would try to show you a little bit more for the queries where we better understand your page and where we better understand that your page matches the user's intent. So with that in mind, if you're limited or severely limited by time, then I would focus on the types of structured data that are really visible. And if you do have a bit more time, or if you have a flexible CMS that allows you to mark up the things that you're talking about, then there's definitely no downside to adding more types of structured data. The one thing I would watch out for here is that you can easily get lost in the weeds with all of the different structured data types. So it's very possible that you mark up every other word on a page and say, well, this is this element, and this entity is related to this entity. And at some point, you spend more time on the structured data than you actually spend on the content, and that's not going to be useful for your website. So trying to find a balance there is something that I'd recommend. 
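To make the car example concrete, here is a small sketch of what that kind of markup could look like. Car and Brand are real schema.org types, but Google documents no rich result for them; the product name and URL below are invented for illustration.

```python
import json

def car_jsonld(name, brand, url):
    """Build a minimal schema.org Car object as JSON-LD.

    Car and Brand are real schema.org types, but Google documents no
    rich result for them, so this markup has no visible search effect;
    it only helps clarify that the page is about a vehicle rather than,
    say, an animal.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Car",
        "name": name,
        "brand": {"@type": "Brand", "name": brand},
        "url": url,
    }

# The serialized JSON would be embedded in the page inside a
# <script type="application/ld+json"> element.
markup = json.dumps(car_jsonld("Jaguar F-Type", "Jaguar", "https://example.com/f-type"))
```

A name like "Jaguar" is exactly the ambiguous case described above: the type annotation tells crawlers it refers to a car, not an animal.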
And if you're really limited by time and you really need to focus on things that have a clear result, then I would focus on the types that are documented in the developer's guide. A few of my new websites in competitive niches have started to rank without building even a single link, just with pure content. It took a while, though, more than two years. So I guess maybe those aren't new websites, but kind of older ones. Could I have saved precious time by building links from high-end websites to reduce this time period? Or would it not have mattered at all, and would it have taken the same amount of time regardless? So we use a ton of different factors when it comes to crawling, indexing, and ranking. So it's really hard to say, if I did this, how would my site rank compared to if I did that? Those kinds of comparisons are kind of futile in general. In practice, though, when you're building a website and you want to get it out there, and you want people to go to the website and recognize the wonderful work that you've put in there, then promoting it appropriately definitely makes sense. And you don't have to do that by dropping links in different places; you can get the word out in different ways. And by getting the word out, you're bringing people to your website. And if they like what they see, then maybe they'll link to your website. And all of these things can add up as signals, and they can help us to better understand where your website fits in with the rest of the web. So from that point of view, I would not just create a website, put it up, not tell anyone about it, and hope that Google finds it and starts ranking it in competitive areas. Like a normal business, spend time to build it up and to build an audience, to understand what people like, to respond to the feedback that they give you, and really build things up as you would with a normal business, essentially. 
How important are speakable, keyword-rich URLs for AMP pages? Our tech guys would like to switch to ID-only pages, for example, amp.example.com/article/12345, and the canonical would still refer to the speakable, keyword-rich URL. As far as I know, this wouldn't change anything at all. So that's something where, if you want to switch to more of an ID-based URL system, that's totally up to you. We do use words in URLs as a really, really small factor when it comes to understanding a page. But as soon as we are able to process the content on the page, those words in URLs are not going to play a significant role anyway. So from that point of view, it's not something where I'd say you're losing out on a lot by not having keywords in your URL, regardless of whether that's your canonical web page or the AMP version of the page. In general, people see URLs less and less frequently, especially if you're on a mobile phone. It's really rare that you actually see a URL. So I think that shift has been taking place over time anyway, in that it used to be that people would focus on the URL and try to understand what this URL is before they click on it. But nowadays, you can't really do that. So I kind of see the importance of including a long URL with a description of the article as something that's fading away a little bit, I guess. And that's really regardless of AMP page or web page in general. Is lastmod necessary for a news sitemap, or is this only for normal web sitemaps? So lastmod is one of those things that we do take into account from a sitemap file. We don't use any of the other attributes. Well, obviously the URL. But any of the other attributes, like priority or change frequency, we don't use. We do use lastmod, however, to recognize when pages have significantly changed and when pages are essentially new. 
But for that to work out, we really need to have a sitemap file that uses the lastmod date in a reasonable way. So don't use today's date as the last modification date for all of your pages, because if everything changed today, then nothing is really that important for us to pick up first. So what you really want to do is give us a clear hierarchy of the changes that you've made, so that if we know that we last crawled maybe yesterday and we see these five pages are new, then we can focus on those five pages a little bit more. So that's kind of the direction I would head there. Use a proper last modification date, specify it for all of your URLs, and really use this regardless of the type of sitemap file. So if it's a news sitemap file, then you have that limitation of 1,000 URLs; for a normal sitemap file, it's essentially the same thing. Does the way searchers use branded terms in conjunction with non-branded terms affect ranking for non-branded terms? For example, if Google noticed a lot of searchers using the query xcompany plus widgets, would Google be more likely to rank xcompany for non-branded queries for widgets? I don't think it would work that way. Essentially, just because people are searching in a particular way for your website doesn't mean that we can say, well, if they're searching like this, then we should rank them for other things as well. However, if people are explicitly looking for your company, then that's a really clear sign that they really want to go to your company. So that's something which, from my point of view, always makes sense: try to find subtle ways to encourage people to do that. Because if people know your business, they understand your brand, and they explicitly search for your brand, then suddenly there's no competition in the search results, because it's really clear they want to go to your website. And that's, essentially, what we call more of a navigational query. 
And that kind of shift, from people searching for widgets to people searching for, well, xcompany widgets, is a way of making sure that people go to your website. So it's not so much that branded queries drive more traffic to non-branded queries, but rather that if people are searching for your brand, suddenly you have a lot less competition, because Google understands fairly quickly that this is your brand and this is your website. The caveat here is, of course, if your brand is a generic keyword, then that's not really going to work that easily. So if, for example, you named your brand Best Widgets, then if people search for best widgets, it's really unclear to us whether they actually want to go to your website. Whereas if you have a clear company name, like, I don't know, xcompany, and you also sell widgets, then if they search for xcompany, suddenly we would understand that this query matches your website, and it is more of a navigational query than an informational query. So that's always worth keeping in mind, especially when you're building up a new website, a new brand: pick something that people can remember, and that's unique in the sense that when people search for it, it's easy to recognize that they really want to go to your website, not that they want to get general information for the keywords that they enter. How important is internal linking in general? My client's current HTML sitemap takes five clicks before you reach a page with content. Outside of using the search bar, the only other way Google can reach these pages is with an XML sitemap. So my guess is that with the current HTML sitemap, you mean the current website in general, not a sitemap page that you would have. So in general, internal linking is really important. And it is almost the best way for us to understand the context of individual pages within your website. 
So it's a lot easier for us to understand the hierarchy of your pages: these are the higher-level pages, the more important pages, the less important pages. All of that is something that's really important for us to be able to pick up on. And it's something that is reflected in a lot of places in search. So just as an example, something that I saw recently: if you have a really flat hierarchy, in that you have all of your pages linked from all of your other pages, then we can't really tell which of these pages are more important than others. That can lead to situations where a sitelink shown right below one of your search results points to a completely unrelated page, because from our point of view, we think, well, all of these pages are equally important, and they're all very related, because they're all linking to each other. So taking any random page and using it as a sitelink is something that could seem reasonable. Whereas if you have a clear hierarchy, then it's a lot easier for us to understand, well, this is the part of the website that belongs together, and this other part belongs together here, and this is the higher-level part. All of this is a lot clearer to understand. So with that said, the number of clicks to reach a page from your home page, for example, is something that helps us understand how that page fits in. But it's not necessarily something that we'd say is bad if it's above three or four or five. So the important part is really that you have a clear hierarchy. And when you have that hierarchy, the absolute number of clicks is less critical. 
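The click-depth idea above can be made concrete with a small sketch: a breadth-first search over the internal-link graph that computes, for every page, the minimum number of clicks from the home page. The site structure here is invented for illustration.

```python
from collections import deque

def click_depth(links, home="/"):
    """For each reachable page, the minimum number of clicks needed to
    get there from the home page, via breadth-first search over the
    internal-link graph. `links` maps a path to the paths it links to."""
    depth = {home: 0}
    queue = deque([home])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depth:  # first visit is the shortest path
                depth[target] = depth[page] + 1
                queue.append(target)
    return depth

# Invented site: home page links to two categories, each linking to a product.
site = {
    "/": ["/shoes/", "/hats/"],
    "/shoes/": ["/shoes/sneaker-x/"],
    "/hats/": ["/hats/cap-y/"],
}
depths = click_depth(site)
# depths["/shoes/sneaker-x/"] == 2: two clicks from the home page
```

Running this over a real crawl of a site is one way to spot pages that are buried much deeper than their importance warrants.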
That said, if there is something that you really want to highlight on your website, and you say, well, this is my favorite product, my best-selling product, the product I make the most profit from, or this is something completely new that I really want to promote, then having that linked more prominently within your website definitely makes sense. Because then we can understand that you really care about this new thing or this unique thing here, and we can show it appropriately, a little bit more visibly, in search as well. So if this is some random product on your website and it takes five, six clicks to get there, perfectly fine. If this is something that is really your best, most important product, and it's a very competitive niche, then maybe link to it more prominently. And with regards to linking from the XML sitemap, that's something that's almost separate, because from the XML sitemap we can pick up which pages are new and changed, but it doesn't give us a lot of context about those pages. OK. After a DMCA notice, my client removed the content and also removed those URLs from Google's index. Still, we're receiving DMCA removal notifications. How do I inform the complainant that my client has removed all of the content? My client had the rights to publish the content, but now their license has expired. I think someone is accidentally muting me. No problem. So with regards to DMCA complaints, the thing you would need to do is respond to that DMCA complaint and say, well, I cleaned this up, I resolved this issue, so that the DMCA complaint goes back to the person who sent it, and they can take that and say, oh, OK, it's resolved, I will cancel this. So that's kind of the direction that you need to take there. 
In general, if this is a single URL and you've removed it from your website and it's no longer shown in search, then having that DMCA complaint pending or open isn't something that would affect the rest of your website. However, if this is something that happens on a larger scale, in that you have a lot of DMCA complaints on your website and you remove them individually, kind of taking them on step by step, then by having all of those DMCA complaints pending and still active, that's something that we might pick up in our algorithms and say, well, we don't really know how well we can trust this website. So if you get these DMCA complaints on a larger scale, then I would respond to them and make it clear that each one is resolved, or make it clear that it's an incorrect DMCA complaint, so that our systems understand that there are no pending DMCA complaints there. Thank you. If you're running too many ads on your articles, the rankings of your website will get affected. Is that true? It's not so much the number of ads on your pages as the general experience on the pages themselves. So that's something where we look at things like the above-the-fold content. And if the above-the-fold content is reasonable for your website, if there's information about those pages, then that's kind of fine. If you have ads that are loaded below the fold, or if you have small parts of the above-the-fold content with ads, then that's generally OK. And that's not so much based on the number of ads, but more based on how prominent these ads are. Are they taking up a large part of the above-the-fold content, or is this something like a small bar on top with five little ads next to each other? That's something that's very different from one very large ad that takes up the whole above-the-fold content. So, John, I have a related question. We have partner links with affiliates. 
And we're wondering if we need to prioritize adding rel=sponsored to our links, or if rel=nofollow is good enough. In general, rel=nofollow is good enough. If you can add rel=sponsored as well, that helps us to understand it a little bit better, but it's not something that would be critical. So in particular, if you have existing links like that, I would just leave them as they are. If you're setting up a new website, then I would use either rel=sponsored, or rel=nofollow and rel=sponsored together, just to make sure that it's a little bit clearer going forward. OK, thanks. Is it problematic to develop with React and implement dynamic rendering on a large site that is essentially static content, which doesn't update regularly? Our IT department is advocating for this route in our upcoming redesign. We're concerned that going the single-page-application route is going to overcomplicate a lot of things, including SEO. So it will get more complicated. I think that's almost always the case when you move from a static HTML setup to a dynamic JavaScript-based framework. So that's something to keep in mind. It will get more complicated. It's not that it's going to become impossible, but it will definitely become a little bit harder to manage. It will become a little bit harder to set up properly. It will become a little bit harder to monitor your setup, to monitor your website so that everything is working well, because it's a lot harder to use simple command-line tools to just fetch the content of your pages and check that the meta tags are the same, all of these things. So that's definitely worth keeping in mind: you might save a lot of time on the development side, or be able to create something really fancy on the development side, but you will almost certainly have to invest a little bit more on the SEO side to make sure that you're doing everything right. 
So that's kind of, I think, the baseline situation. It's not that it's going to become impossible to do. And there are different ways of dealing with it, which could be as simple as, well, maybe you just pre-render all of the content and serve the static HTML version either to crawlers or even to all users, depending on how you have that set up. But it is something where you'll have to look a little bit further than just, oh, I will switch it over and everything will continue to work the same as before. So that's, I think, definitely worth keeping in mind. I think it's getting a lot easier, it has gotten a lot easier, because there's a lot more information out there on what you need to do. There's a ton of documentation in our developer's guide on how to deal with JavaScript sites. A lot of the SEO tools nowadays can deal with JavaScript sites. But it is something where you have to spend a little bit more time, especially in the beginning, to make sure that you're picking a setup that is actually going to work well for your business and for search in general. Let's see. And one question about Discover: my website is in Swedish. However, my articles are not appearing for me and the rest of the Swedish people when connected to Swedish Wi-Fi. However, when I connect with Wi-Fi outside of Sweden, my website's articles are appearing as normal. The problem is that my target group lives in Sweden. It started on a specific date, and it seems more Swedish websites have the same problem. I'm not aware of anything specific related to Sweden, but I'll double-check with the team to see what might be happening there. I notice you have a .net domain name, which is a generic top-level domain from our point of view. One thing I would make sure of is that you also have the geo-targeting setting set to Sweden, if that's really your target audience. So that's something you can set in Search Console. 
It's a little bit hidden at the moment, because it's part of the old Search Console, but that's definitely something I would try to set up. But I'll also check in with the Discover team to see if they're aware of anything happening there. Does a broad core algorithm update also affect Google Discover? A couple of weeks after the January 2020 update, we noticed a drop in traffic coming from that source, and Search Console also shows dropping impressions overall. Yes, I believe we included that in one of the core update tweet series: these core updates can affect Google Discover visibility as well. Should we have our sites automatically append our company names to the end of the meta titles, or leave them off and let Google do it? Right now, our pages automatically append the company name to the title, and it's annoying, because often the titles get too long and end up with ellipses. Yeah, essentially, that's up to you. So what happens on our side is we try to understand what the website name is in general. And if that's not included in the title of your pages, we may append it to the end as well. So probably what happens here is we would show them in more or less the same way anyway. In general, though, having the website name attached to your titles makes it a lot easier for us to understand which title to actually use. So if you give us this information, we'll probably end up using it correctly. If you don't give us the website name in the title at all, then it'll be a lot harder for us to figure out what you would like to have shown there as the website name or company name in general. To handle our pagination (these are category pages, actually, and they may contain over 100 pages): do you think it is good practice if we create page one, which lists our 30 most important products, and page two, a very lightweight view-all page, which might contain up to 5,000 unique product URLs? 
And page two would be canonicalized to, I guess, a view-all page. So in general, this goes back to what I mentioned before, in that having a clear hierarchy on your website makes it a lot easier for us to understand your website in general. So if you have a category page with page one, which is linked here, and then page two, which is actually a view-all page with links to all of the other pages, then we can still understand that things on page one are probably more important, and somewhat higher up in the hierarchy than everything else. But within this bucket of everything else, everything is kind of equivalent. And from our point of view, that makes it really hard to understand which of these are more critical to you. And it makes it a little bit harder to understand how things are related; which of these products end up being similar to each other is really hard for us to tell. It kind of depends on your website, though. If this is within a set of different categories, and the first page in each category is already a really good sample of the important products that you care about in that category, then we have a lot of hierarchy just from those different categories already. And then the view-all page, which links to everything else, is something that is essentially connected further down in the hierarchy of the website. So maybe it doesn't matter that much. But on the other hand, it might also just be that, if those view-all pages aren't that important in general, you might as well use a normal pagination setup, rather than this complicated one that has the normal pagination and then the view-all page; you might as well just link between the individual pages of the pagination directly. Yeah. Go ahead. Regarding site hierarchies and overall architecture: we're working with a fairly big retailer, in the sense that they have a few hundred thousand products. 
And there are about 1,000 categories or something like that. And one of the main issues I think they have is that most of those 1,000 categories are in the sitewide menu, in the header menu. So that's quite a lot of links on every single page. So from my point of view, on one side, that kind of makes all of the categories equal, in the sense that there is no hierarchy for Googlebot, because everything is linked sitewide. And on the other hand, having 1,000 links on every single page might make it difficult for Googlebot to crawl everything, maybe, or to assess exactly where the PageRank should flow. So what are your thoughts? I mean, I know what the best practice is, but what are the reasons why you shouldn't go with such a flat architecture? No. So I think, purely from a technical point of view, we'd be able to deal with 1,000 links on a page. I don't see a problem there. It used to be that we would just use, I think, the first 1,000 links on a page, but I think we dropped that 10 years ago or something like that, a pretty long time ago. So purely from a technical point of view, that's not going to cause a lot of issues. But I could imagine that, when it comes to understanding the context of individual categories, it is something that we might have trouble with. So in particular, you might notice this with sitelinks. That's the place where I ended up noticing it, because people flagged it to me: if you search for one category and you have different sitelinks below that category page in the search results page, and those other categories that are linked there are totally unrelated and don't make a lot of sense, then that's a sign that we don't really understand how these pages should be connected and which of these categories kind of belong together. 
So that doesn't necessarily mean that you have to go from 1,000 categories to, I don't know, 100 categories with links to subcategories or something like that. But maybe there are other ways to make it a little bit clearer that these things belong together, these things belong together, and these other things belong together. So that's the direction I would go there first: figure out whether it really is a problem or not. And if it is a problem, then think about ways that you can split things up a little bit more clearly. Well, one thing they do, they already show subcategories in the sidebar. Once you go to a parent category, they show the subcategories in the sidebar. The only thing is that all of these categories are also in the main menu. So the main menu is, again, something like 900 or 1,000 categories, and obviously, that's not very useful for users either. We're guessing that, because we notice a lot of users are just using the search bar, the internal search, probably simply because the menu is so big. I was wondering: I know that Google kind of splits main content from supplemental content, and the menu falls into that supplemental content. Does that mean it's less of a problem to have all of the category links in the menu, or doesn't that play a role there? I don't think you'd see a big difference there, because essentially what would happen is we would recognize the link from one category to another category. We'd see that in the top menu. And we'd also see it in the sidebar, and maybe within the body of the content as well. But it's not that we would add up the value of those individual links. It's just like, well, that link is already there, and you're not giving us more information by also including it multiple times. So that's something where just piling on more versions of that link to create a category structure doesn't really work. It's really like, we already see it. 
And we know that link is there, even if it's in the header somewhere with the main navigation. So because you see those links everywhere, as you mentioned earlier with the sitelinks, you might not be able to figure out, OK, this is a parent category that targets a topic, and these are subcategories that target specific parts of that topic, and things like that. Would that create a problem? As far as I understand it, yes. So if we already know that link is there, then it's not that we're going to look at it also being here and also here to try to guess at a hierarchy a little more clearly. It's really like: give us a clear hierarchy, and we'll try to work with that. OK, cool. Thanks. Another one that sounds kind of similar, let's see. If page A1 has 20 outbound links, other than a sitemap, these are links through which Google can discover those 20 pages. If we put a canonical tag on A1 pointing to page A, would Google still be able to discover those 20 outlinks on page A1? So with the rel canonical, you're essentially telling us that these pages are equivalent. And from our point of view, if you tell us that these pages are equivalent, then we can just pick one of them and use that for indexing. And it might be that we pick page A1 as the canonical, even though you have a rel canonical pointing somewhere else, because for canonicalization we use a lot of different factors. That includes the rel canonical, but it also includes redirects, internal linking, sitemap files; all of these things come together when we pick a canonical. So if we pick page A1 as the canonical, we'll know about those 20 links. If we pick page A as the canonical and it doesn't link to those 20 other pages, then we will not use those 20 links on page A1 as part of page A.
So with the choice of the canonical, if we understand that these pages belong to a set of pages that could be canonical, we will only focus on the content and the links of the canonical page. Everything that's on the non-canonical versions we end up essentially skipping over for indexing; we would not take it into account. So if these pages are supposed to be equivalent, then make sure that they're really equivalent. If they're supposed to be different, then let them be indexed individually. Googlebot now uses the latest version of Chrome to crawl, render, and index content. When there are discrepancies between the source and the rendered JavaScript, what factors does it use to determine the authoritative version? So if we render a page, we will use the rendered content only as the basis for indexing. So if, during rendering, the content of the page is essentially deleted and replaced with a placeholder text, then we will only use that placeholder text if we index that page after rendering. So anything that was on the page before will be gone. Usually, that's less of an issue, because if you have a JavaScript framework, you're adding content to the page. So we would have the existing content plus the JavaScript content, and that's what we use for indexing. That kind of makes sense. The part where this becomes problematic is if there's a conflict between the content that is on the page before and the content that is on the page afterwards. One situation that we have seen, for example, is if the content on the page, before it gets indexed, has a noindex meta tag on it; then regardless of what you place there with JavaScript afterwards, that noindex is always going to be there. So that's something where, if you have a severe conflict between the static version and the JavaScript version, it becomes a lot trickier for us.
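That noindex conflict is easy to check for yourself. The sketch below is a minimal illustration using Python's standard library (not Google's implementation): it pulls the robots directives out of the static HTML, the version that exists before any JavaScript runs. If noindex is there, removing it later with JavaScript won't help.

```python
from html.parser import HTMLParser

# Sketch of the conflict described above: extract robots meta
# directives from the static (pre-render) HTML.
class RobotsMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives += [d.strip().lower()
                                for d in a.get("content", "").split(",")]

def static_html_blocks_indexing(html):
    p = RobotsMetaParser()
    p.feed(html)
    return "noindex" in p.directives

static_page = '<head><meta name="robots" content="noindex"></head>'
print(static_html_blocks_indexing(static_page))  # True
```

Running the same check against the rendered DOM and comparing the two results would surface exactly the kind of static-versus-rendered conflict described here.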
Another common case is if you have a rel canonical on the static page before, and you have a different rel canonical with JavaScript afterwards; then it's tricky for us to understand which of these is actually useful. And that comes from us, on the one hand, looking at the static version first and then rendering the version based on that. So that's something where, as much as possible, you want to make sure that those pages align. If you add additional information with JavaScript, that's fantastic. If you add information that clashes with the existing information, then that's something where it gets a lot trickier. When does Googlebot take the rendered DOM snapshot used for indexing? We don't have a specific time. So it's not that there's a specific timeout that happens for us to pick up the indexed version. We try to recognize when a page is ready. Um, whoops. Wait. Manasha, I think you're presenting. Let's see. Maybe I can turn that off. So we don't have a specific time with regards to understanding when a page is ready. The main reason we don't have a specific time is that when it comes to rendering, we do a lot of caching, and we take a lot of steps to try to understand when a page is generally ready. And that can result in things like the timers on the pages being processed quite differently than when you process a page manually. So it's really hard to specify a specific time, just because of the way that we render pages. For example, if you have different JavaScript APIs that you're pulling content in from, and calling those APIs takes a second each when you run it in the browser, it might be that we already have those individual elements cached on our side. And when we render the page, we can render this page in a fraction of a second. So we might get the full version in a fraction of a second, whereas you might need to wait maybe 10 seconds to actually render the full page in your browser.
And similarly, it can be the other way around, in that when you access a page, you're able to get all of those requests in right away, and you can get that page essentially right at that time. Whereas when we try to render the page, it might be that we're limited by the amount of crawling that we can do for individual pages, and then suddenly it takes us maybe an hour to actually render that page. So those differences come out just because of the different approaches that we take for rendering content for indexing versus what you would do if you were accessing a page with a browser. Hey, John, may I interrupt you with a question? Sure, go for it. So we have a client that is a retail client. They had some pages for summer collections 2016, 17, 18. And last year, they decided to make a non-year URL for the summer collection. They added redirects on all of these. But the problem is Google is still seeing these URLs. It's indexing them. If you try to access them, the 301 works. Every tool I tested with sees the 301. In Search Console, when I inspect the URLs, it says that they are indexable. They have a self canonical. If I test the live URL, it still says they're indexable, but the canonical changes to the new URL that was added last year. Could you help in any way, like pointing at what might be the issue here? So when do you see the old URLs? Is that something you see in Search Console, or is it showing in Search? So if I search for the brand and summer collection, I see those old URLs besides the new one. OK, in addition to the new one. Yeah, yeah, yeah. So you see like three or four URLs for the same thing. OK. Usually that kind of cleans itself up over time. So that's something where, if we see that there's a redirect, if we see that the rel canonical is pointing to the right version, then over time we will recognize that more and more signals point to this version, so we'll only index that version.
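The verification the questioner did by hand can be automated. As a hedged sketch with invented URLs (a real check would issue HTTP requests instead), the helper below follows an in-memory redirect map standing in for the site's 301 rules and confirms that every retired URL ends at the new evergreen URL:

```python
# Invented redirect map standing in for the site's 301 rules.
redirects = {
    "/summer-2016": "/summer-collection",
    "/summer-2017": "/summer-collection",
    "/summer-2018": "/summer-2017",  # chains are followed too
}

def final_target(url, redirects, max_hops=5):
    # Follow redirects until we land on a non-redirecting URL,
    # with a hop limit to guard against loops.
    hops = 0
    while url in redirects and hops < max_hops:
        url = redirects[url]
        hops += 1
    return url

for old in ("/summer-2016", "/summer-2017", "/summer-2018"):
    print(old, "->", final_target(old, redirects))
    # all three end at /summer-collection
```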
The thing I would watch out for is to make sure that your internal links don't refer to the old URLs. That's something that people often forget. The sitemap file, and if you have hreflang annotations, all of those should also point to the new URLs. Sometimes you can use third-party crawlers to just recrawl your whole website and make sure that there's no reference to the old URLs. And if you're sure that that's the case, then that's something that will just settle down over time. So for individual pages, it's really hard to force that to happen. It just happens naturally over time. And the thing to also keep in mind is if you explicitly look for the old URLs. So if you do a site query, or if you do a search which includes maybe words from the old URL, then it might be that we would still show that in search even when we've switched over to the new one as the canonical. That's really common. For example, if you do a site move from one domain to another and you search explicitly for the old domain, we'll show it to you, because we're trying to be helpful by saying, well, this is the thing that you're looking for. But in general, if you've made sure that everything aligns, that everything matches the new version, then I would just let it run. OK, thanks. Sure. So, John, I have a question. I made a mistake and shared my screen, sorry for that. So we have a new site, and the site has something that appears like a technical bug. So maybe, if it's fine, I will present my screen, but I'll try to explain it. So we basically, give me a second. We basically have a page that looks like it's appearing in Google. But while you're checking it, it's a new site, and even if the inspector is showing that the site is indexed and looks like it has a chance to get a rank, the results come up empty. So while technically it looks like you're indexed and it's fine, any way that we're checking, the pages don't actually appear indexed.
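The internal-link audit John suggests can be sketched in a few lines. The markup and URL patterns below are hypothetical stand-ins for the retired year-based collection pages; a real audit would run this over every crawled page on the site.

```python
from html.parser import HTMLParser

# Hypothetical patterns for the retired year-based URLs.
OLD_PATTERNS = ("/summer-collection-2016", "/summer-collection-2017",
                "/summer-collection-2018")

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def stale_links(html):
    # Flag any link that still references a retired URL pattern.
    c = LinkCollector()
    c.feed(html)
    return [h for h in c.links if any(p in h for p in OLD_PATTERNS)]

page = ('<a href="/summer-collection">new</a>'
        '<a href="/summer-collection-2017">old</a>')
print(stale_links(page))  # ['/summer-collection-2017']
```

An empty result across every page (plus the sitemap and hreflang annotations) is the state John describes as "everything aligns".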
Do you have, by chance, any idea why this kind of stuff can happen? We also tried to escalate it to the community forum, and we didn't really solve it. So one place where this could happen is if there is either a manual action or if you have a URL removal in place. Both of these are similar in that the inspection tool would say it's indexed, but it wouldn't show in search, because from our point of view, these are basically just filters that happen on top of the search results. The page can still be indexed; it's just not showing in search. And neither of them is happening here. OK. The other thing is that sometimes with the site query, because it's a restrict rather than a comprehensive list of URLs that match from an indexing point of view, it can happen that something is just not shown in the site query. So it might be indexed, but maybe indexed in a way that, for the site query itself, it wouldn't be shown. And depending on how you search for that page, it might be the same. So what you could do is double check in Search Console in the Search Performance section and filter for that specific URL. And then you can see, is it occasionally shown in search or not at all? No, it doesn't show in search. It says it's indexed, but it's not actually appearing. And it's a new site that suddenly got dropped without any manual action appearing in Search Console. It might just be that our algorithms are trying to figure out how to show the site properly. That's something that sometimes takes a bit of time to settle down. It really depends on the website itself. It's also something where, if the topic is something that is often used for spam, then it might be a bit trickier for our algorithms to figure out: oh, well, actually, it uses similar keywords to maybe other kinds of spammy content, but actually this one is a good website that we should show in Search. And that's something that can take a bit of time to settle down properly.
OK, thanks. Sure. All right, any other questions from any of you? Hello, John. I have a question, if you can hear me. Sure. I assume that you will remember the Ikea case, which is related to indexing issues with wrong hreflang attributes. And Turkish Airlines is one of our main customers, and they complain about the same thing. They wonder when Google can fix this problem. Which website? What's that? I remember that Ikea was having some problems with the wrong hreflang indexing status. You remember that? Google was indexing the wrong pages for Ikea from different geographies and from different queries. And we have the same problem with Turkish Airlines. And our customer wonders when this problem can be fixed. That sounds like something different, because I think the issue that we saw with the Ikea website was unique there. So it would be really useful to have more specific information on what kind of problem you're seeing. So if you want to stick around afterwards, maybe we can look at that site a little bit more. Or if you want to drop a link in the chat to some sample, I can take a look at that. Let me give you an example. We see that our pages are getting indexed in Turkish queries with maybe English pages. But we are using hreflang correctly. There is nothing wrong with the code. And we assume that this would be related to the Ikea case. But then you have Turkish content and English content. Is that correct? Yes. We have different contents, and we are using hreflang correctly, but Google is indexing them differently. OK. Maybe you can drop some sample URLs in the chat, and I can pick that up afterwards. OK. OK. Thank you. Because these tend to be kind of unique issues. The ones that we see more often are cases where it's the same content for different countries. So for example, if you have German-language content for Germany and Switzerland, and it's exactly the same content, that kind of makes it hard for us, right? Of course, yes. But this is Turkish and English content.
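One thing worth ruling out in a case like this (a generic hreflang sanity check, not a diagnosis of the actual Turkish Airlines issue) is that every hreflang annotation has a return link: if the Turkish page lists the English page as an alternate, the English page has to point back. The URLs below are invented for illustration.

```python
# Generic hreflang reciprocity check with invented URLs; hreflang
# annotations are only honored when the alternate page links back.
def missing_return_links(hreflang_map):
    """hreflang_map: {page_url: {lang_code: alternate_url}}"""
    problems = []
    for page, alternates in hreflang_map.items():
        for lang, alt in alternates.items():
            back_links = hreflang_map.get(alt, {})
            if page not in back_links.values():
                problems.append((page, lang, alt))  # no return link
    return problems

pages = {
    "https://example.com/tr/": {"en": "https://example.com/en/"},
    "https://example.com/en/": {},  # missing the tr return link
}
print(missing_return_links(pages))
# [('https://example.com/tr/', 'en', 'https://example.com/en/')]
```

An empty result doesn't prove the hreflang setup is complete, but a non-empty one is a concrete, fixable error of the kind that often underlies "Google shows the wrong language version" reports.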
And we should be able to see that it's different. But maybe you can give me some sample URLs, and I can check. OK, I will do that. Thank you. Sure. All right. Looks like we're kind of out of time. I still have a bit of time left, so if any of you want to stick around and chat off the record after the recording stops, feel free to do so. But otherwise, I'll pause the recording here. Thank you all for joining in. Thanks for all of the questions that were submitted. If there's anything else on your mind that we weren't able to solve here, or if any of the answers were confusing, feel free to drop us a note on Twitter, post in the product forums, or bring a question to one of the future Hangouts. All right. Thanks, everyone, and wishing you all a great weekend in the meantime. Bye-bye. Bye. Bye.