All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst at Google in Switzerland. And part of what we do are these office hours hangouts, where people can join in and ask their questions around their website and web search. A bunch of stuff has been submitted already, but if any of you want to get started, feel free to jump on in now.

I have a question about dynamically loading content. So if I have a really long page, like a long article, it's going to have a lot of text, images, and some ads on there as well. Are there any best practices for dynamically loading that using JavaScript, while making sure that Google can properly crawl and render that content? I mean, actually crawl and index it.

Yeah. I guess this kind of goes into the topic of lazy loading and infinite scroll. So that's something where, I guess, first of all, you have to make a decision on whether or not you want that content indexed, or if it's critical for your site to have that content indexed or not. Sometimes when you use lazy loading or infinite scroll, you're kind of adding content from other pages, so it doesn't necessarily need to be indexed. But if you do decide that it does need to be indexed, we have some information in the developer documentation on how to implement lazy loading in a way that works well for search, and I would just double-check that.

So I think, in general, what happens when we render the page to process the JavaScript is we render it with a fairly tall viewport, and basically we see what gets loaded into that viewport. And when we see that everything has finished loading, we use that for indexing. Whereas if you have code on your page that watches out for specific events taking place, like someone scrolling down to a certain position on the page or clicking a Read More button, then probably those are things that won't get triggered. So that's just, very roughly, the difference there. If it's loaded on page load, we'll probably index it. If you have to do something special to make it load into the page, then it's very likely that we won't be able to index that.

OK. So I reviewed the lazy loading documentation, and it suggests basically using pagination. So I guess the better question would be, is there any way to do that and have all of the content be indexable without using pagination?

Yeah. I mean, if you want to take the content from multiple pages and have it all on one page, you'd almost need to load that in the initial page load. So it's kind of hard to work around that.

OK. And then you mentioned that it will just crawl using a really tall viewport. So does that mean that if the content is so long that it exceeds what the Googlebot viewport would be, then the content left over at the bottom would just not be indexed?

No, it's not that it wouldn't be indexed. It's just that we wouldn't use that to trigger additional loading. So I think it uses, what is it, the Intersection Observer or something like that, I forget what the actual name was, to trigger the code on the page and say the page is now this tall. And if your code is waiting for a sign that the page is now a little bit longer than that, then that's something we won't trigger. It's not that we'll stop indexing at that viewport height. It's just that's what we'll use to trigger the JavaScript on the page.
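As an illustration of the difference John describes, here is a minimal sketch in TypeScript, assuming a hypothetical `loadMoreContent` helper and `/api/article-rest` endpoint (both invented for the example). The scroll-triggered pattern depends on an IntersectionObserver callback firing, which Google's renderer may never produce; kicking off the load immediately puts the content in the DOM during the initial page load.

```typescript
// Hypothetical helper that fetches the rest of a long article and appends
// it to the page; the endpoint name is invented for this sketch.
async function loadMoreContent(container: HTMLElement): Promise<void> {
  const resp = await fetch("/api/article-rest");
  container.insertAdjacentHTML("beforeend", await resp.text());
}

const container = document.querySelector<HTMLElement>("#article-rest");

if (container) {
  // Pattern A: scroll-triggered loading. The callback only fires when the
  // sentinel element scrolls into view -- an event Googlebot's renderer may
  // never produce, so this content may not make it into the index.
  const observer = new IntersectionObserver((entries) => {
    if (entries.some((e) => e.isIntersecting)) {
      observer.disconnect();
      void loadMoreContent(container);
    }
  });
  observer.observe(container);

  // Pattern B: load on page load. Starting the fetch immediately means the
  // content is in the DOM once rendering settles, so it can be indexed:
  // void loadMoreContent(container);
}
```

For images specifically, the native `loading="lazy"` attribute is generally the simpler route, since it doesn't depend on scroll events at all.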
OK, OK. So if the trigger doesn't fire, then essentially that content also wouldn't be indexed. I see. OK, and what's the best way to check if the content is being properly indexed? I have been using the mobile-friendly tool; I don't know if there are any better ways to do it. You can't see the full length of the page in the screenshot, so I'm not super clear on whether the content is being indexed. Like right now, I have all the content in a JavaScript variable on the page, which I do see: it's in the page source, and I do see it when I test with the mobile-friendly tool. But I can't tell if that means that Googlebot is actually reading the content inside the JavaScript variable, or if it kind of ignores it. So what's the best way to check that?

What I usually do is just take a piece of that content and search for it in quotes, like one sentence, or four or five words, and just double-check that it actually shows up in search. And if it shows up in search, then that's kind of the ultimate proof. With the testing tools, you can use Inspect URL to look at the fully rendered HTML of the page. But it's sometimes a bit tricky, especially if you have the content in JavaScript as well as on the page: what exactly are you finding there? So usually, waiting until one of those pages is indexed and then just double-checking whether it actually goes down that far is, I guess, the most reliable approach.

OK, so I actually did try that, and I get kind of mixed results. It seems like I can see where the text stops appearing in the search results, so approximately where the page might not be indexed anymore. But then there are images below that point on the page that I can find if I do a site: search on Google Images. What would be the possible explanations for an image being indexed lower down on the page, but the text in that same area not being indexed?

I don't know. Hard to say. It kind of depends on how you have things set up on your site. One of the differences with regards to images and the textual content is that we tend not to reindex images frequently. So if, for example, one time out of ten we crawl the page and find the link to the image, then we'll pick up that image and put it into Image Search. Whereas if the text changes from time to time, then we'll always have the current version for text. So the image might be a little bit more stable, just because we don't refresh it that frequently.

OK, that makes sense. All right, thank you very much. Sure.

All right. Any other questions before we jump in with the rest?

John, I have one regarding expired products on e-commerce websites. We're working with a shop that manufactures its own products. It's a jewelry shop; products are unique. They make one of each, they sell out, and then that's gone. They don't usually make the same products again. They might make similar products, but not the identical same product. The thing is, once they go out of stock, they still seem to get a lot of traffic from Google Images. With jewelry, people look a lot in Google Images, and it's kind of an emotional purchase; they click on it just because they really like what they see. And the site shows a section saying this is not in stock anymore, here are very similar products that you might want to purchase.
So is it kind of a good idea to keep those products active, even if they're out of stock, so they still get traffic from Google Images, rather than either 404ing them or redirecting them to something else, a category or something like that? Is there any downside to that?

Yeah, I think in a case like that, where these are unique products and representative of the business or of the website overall, that seems like something that I would try to keep somehow. And that could be that maybe you have an archive section of your website, or a "similar products that we've sold in the past" kind of thing, just to keep that content available so that people can browse it. Because that particular item is out of stock, but it's representative of the kind of work that you do, and it might still be useful to actually have some of that indexed.

With regards to how you keep that indexed, I would try to move it more into an archive section, or a "things that we've done in the past" references section, something like that, rather than keeping a product page live that says this product is out of stock, or you can't buy this product anymore, it was a one-time thing, and here's a photo of it. Because those kinds of out-of-stock or not-available pages generally tend to end up as soft 404 pages. And when they go into a soft 404 state, the landing page disappears from search, and with that, the image is also kind of lost. So if you have the image on an archive page that remains for a longer time, then that image can remain in Image Search longer than if the product landing page ends up dropping out of search.

Ah, I see. So in that case, they have products that have been out of stock for a year or two, something like that, and still seem to be indexed and are getting traffic. Should they begin to think about restructuring that now? Or only once they see that those pages start to get de-indexed?

Yeah. I mean, with soft 404s, it's sometimes tricky. Because obviously, if there were a clear sign saying that this page is a 404, then we would just be able to drop them out. And in general, we try to recognize text on the page that tells us, well, this is no longer available. So that includes things like search results pages, where it says "no results found", and also products in general, where it says "this product is no longer available". If those product pages are created in a way that doesn't have this kind of not-available text on them, but more like a "we sold this at this price" kind of thing, then that could be something that could remain. But in general, if it's a product page and you have text on there saying this is not available, I would assume at some point we will recognize that as a soft 404 and drop those.

Right. Well, it currently just says out of stock, so I don't know if Google perceives that in any way. And if we include it in the structured data markup and it shows up as out of stock, I think that's a good user experience as well, because people who understand it's out of stock and don't want to visit it, fine; but people who might want to see similar products can go in and see it. I was just curious whether there are any significant disadvantages to leaving a lot of these product pages indexed overall.

My general feeling is, I don't know how many of these products end up being there.
But it feels like it's very easy to create a mass of pages that are kind of low-quality-ish and that don't tend to attract a lot of attention. Whereas if you were able to concentrate some of those existing products and put them on a gallery or references or kind of an archive page, then that content could collect a lot of value over time and still be relevant. And you wouldn't have all of these hundreds or thousands of other pages where, essentially, if people go there, they can't buy what they wanted to buy. But if we're talking about, I don't know, 20, 30 pages, then that doesn't really matter either way. But if these are really thousands of pages every year that end up going into this state, then that feels like a lost opportunity.

And in cases where those pages do remain indexed, does Google look at maybe how well the overall user experience works, how easy it is for people to go to similar products or something like that? If they see a product out of stock, how the design and everything works towards still recommending them a very similar product, something that keeps the user on the site and satisfied with maybe a different product at a similar price?

I don't think we'd have anything direct to go in that direction. But indirectly, that might be something where, if people land on these pages and they're like, this is terrible, I wanted this but I can't get it here, I'll see if I can get it somewhere else, then that's something where you end up kind of losing that potential recommendation from someone.

OK, so as long as you don't frustrate your users and keep them satisfied, that should be kind of the end goal.

Yeah, yeah. I'm still, I don't know, kind of cautious about the approach overall. So that's something where, especially depending on the number of products that you're talking about, I would tend to move those more into kind of a persistent gallery or references type section, just to make it clear to people, and also to give those images a little bit more, I don't know, a little bit more weight in Image Search as well. Because over time, we'll see that this gallery section collects a lot of value, and then that could tell us, well, actually, these images are really important. Whereas if you have individual kind of lost pages within the website, it could be trickier with competition.

Yeah, yeah, that makes sense. I'm mainly asking since I've noticed the really large fashion retailers, I'm not going to name anyone, but the really large ones, do kind of keep out-of-stock products for what seems like very long periods of time. So I was wondering whether there's any official way to do this or not. This is a smaller shop, so a few hundred products, so it's easier to create that kind of gallery. But with very large retailers, I'm guessing that might be a bit more of a challenge.

Yeah, I think that's something also where some sites have kind of policies where they say, well, we try to keep it on the website for at least half a year, and then afterwards we just remove it, kind of thing. And you can play around with different variations of that. And depending on what kind of products you have and what people are searching for, that's something where you can get some value out of it. Even if you currently don't have it in stock, maybe you have it in a different color or a different brand or whatever; those are all variations, yeah.

Cool, thanks. Cool.
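Since marking products as out of stock in structured data came up, here is a rough sketch of what that markup can look like: a schema.org Product whose Offer carries an `OutOfStock` availability, serialized as JSON-LD. The product details are invented for the example; only the types and the availability URL are from the schema.org vocabulary.

```typescript
// Minimal schema.org Product with an out-of-stock Offer, serialized as
// JSON-LD. The product data here is invented.
const productJsonLd = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "One-of-a-kind silver ring", // hypothetical product
  image: "https://example.com/images/ring-0042.jpg",
  offers: {
    "@type": "Offer",
    priceCurrency: "EUR",
    price: "249.00",
    // States the out-of-stock status explicitly instead of relying on
    // Google inferring it from the page text.
    availability: "https://schema.org/OutOfStock",
  },
};

// Emit the script tag, e.g. from a server-side template.
console.log(
  `<script type="application/ld+json">${JSON.stringify(productJsonLd)}</script>`
);
```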
All right, so I have Brittany's question on top. I don't know, do you want to go into it briefly, or should I just read it?

Sure, I can jump in. So we did a domain migration at the beginning of August, with pretty much entire traffic loss. In the past few weeks, we're at about 60% traffic loss. So, more broadly, I'm asking for updates and advice. But one specific question I have is, we've been working on a CMS rewrite that would allow us to render our content server-side, since right now we render a lot of our content client-side. We've been having some issues with CLS, some UX issues, and we're dependent on prerendering. So I think this would be a great investment for our users. But we're a little wary of making such a big back-end change when it seems like Google is still really not fully processing our domain migration and passing all of the signals from the old domain to the new one. So I was wondering if you had thoughts on whether it's too risky to do such a big back-end change right now, or if we should just be full steam ahead, or how we make these decisions while we're in the stage that we are.

Yeah. I mean, it's always frustrating to hear cases like that, because domain migrations usually either go smoothly or they just don't work out. So I've been pinging the engineering teams about this pretty much every week, just to make sure that it stays top of mind. But I can't promise anything; I don't know what the final state there will be. But in general, with regards to further updates on the website, I would just keep working on it. I don't see any reason to wait and not make changes. The one thing I would not do is another migration kind of change, which probably you're kind of wary of anyway; that's the one thing I would try to avoid. But changes within the website, that seems like a perfect thing to do.

One thing to keep in mind is, if you make changes within the website that result in the URLs changing, then you're also kind of redirecting, and it's a little bit harder to track what was there before and what was there afterwards. But if you're able to make those changes in the back end without changing the URL structure of the website, then that's basically just an improvement overall, and that feels like something totally unproblematic to work on. And especially if you can improve the usability, if you can improve the speed overall, those are always good things.

OK. I don't know if you recall, but when we started on this domain migration plan, the idea was, we have two fairly similar products. We wanted to take our flagship product and get it on the domain that we really wanted to move to; we thought it was a fresh domain. And then take a weaker product and merge it in. So long term, we're hoping to stay on the domain we're on. We're obviously hoping to see some more recovery. But we're also hoping to merge in our weaker domain that covers a similar topic. So would rebuilding pages on our new domain and redirecting, would that be in that area of URL changes that you're saying we should be wary of?

I think that would be fine.

Adding more content, essentially moving some further content from another site in there?

I think that would be fine. I don't see any problems. It's trickier if you take your top pages and you change those URLs, because that means you change all of the internal linking structure of the website; that gets kind of tricky. But if you're adding more content, that feels like a perfect move. Kind of keep building on what you have there.
OK. And one more question I'll sneak in. So technically, we're still in that period where Search Console says we can cancel our migration. We don't want to; we want to see this through. But we're not ruling out any possibilities at this point, since this is having such a dramatic impact on our business and ability to operate. So do you have any thoughts on when or if we should consider reverting the domain migration, or does that at this point seem as scary to you as it does to me?

Yeah, I would try to avoid reverting, because, in general, that just makes it trickier. Because then it's not so much that everything goes back to the old domain and it suddenly becomes just as visible as before. It's basically, you're taking the current state and then moving that to a different domain. And you're kind of moving with this weird mixed state at the moment. So I don't think that would make anything better.

OK, thank you. Sure.

All right, let me go through some of the other questions. I wanted to get your thoughts on a project that I'm working on. I am separating one website into two websites, as the business now offers two totally different services. So far, I've replicated the pages that need to move on the new domain and applied the rel canonical tag on those pages on the former domain, pointing to the new location. Once Search Console and the search results start recognizing the new location for those pages, the plan is to apply 301 redirects. And, let me see where this is going, long question: is there anything that you would recommend to minimize the risk of losing or damaging visibility?

So in general, when you take one website and you split it off into two websites, it's not something that we would see as a domain migration, because you're essentially generating a new state. You have the previous state, this one big website, and then you're generating two separate websites out of that. So it's really hard to say ahead of time what the final state will be with regards to search. But in general, if you're able to move things on a per-URL basis, if you're able to fix the internal linking so it works well within both of those websites, then that seems like something that should be fairly straightforward. And I think the approach that you're taking here, where you're generating these two websites and then using rel canonical to get things started, and then setting up 301 redirects, I think that's perfectly fine. You could probably also just do the redirects from the start and say, well, these are separate pages and they moved here. And if the internal linking on both of those sites is OK, then that should just work out fine.

One thing I would recommend, though, just in general with regards to any kind of move like this, is that you track things really meticulously. So really keep track of all of the URLs that you have before, and double-check all of those URLs afterwards to make sure that they're all redirecting appropriately, so that you don't run into a situation where you move some URLs but forget about others, and then after half a year you notice, oh, why is the wrong one being indexed? And then you set up redirects for those as well. So really, from the start, set up a clear plan for how you want to handle the migration. And that also lets you use automated crawlers like Screaming Frog or some others out there to double-check your new websites, to make sure that you have everything set up properly. So that's kind of the one thing I would watch out for there. We also have tons of advice in the Help Center on handling these kinds of migrations. But it sounds like you have a lot of stuff already under control, so probably you've seen some of that.
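To make the "track things meticulously" advice concrete, here is a small sketch of the kind of check that crawlers like Screaming Frog automate: given a hand-maintained mapping of old URLs to new ones (the URLs below are placeholders), request each old URL without following redirects and verify that it returns a 301 to the expected target.

```typescript
// Hypothetical old-URL -> new-URL mapping; in practice this would be the
// full inventory of URLs captured before the migration.
const redirectMap: Record<string, string> = {
  "https://old.example/services/a": "https://site-a.example/services/a",
  "https://old.example/services/b": "https://site-b.example/services/b",
};

// Run with Node 18+, where fetch exposes manual redirects with their headers.
async function verifyRedirects(): Promise<void> {
  for (const [oldUrl, expected] of Object.entries(redirectMap)) {
    // redirect: "manual" reports the redirect instead of following it.
    const resp = await fetch(oldUrl, { redirect: "manual" });
    const location = resp.headers.get("location") ?? "(none)";
    const ok = resp.status === 301 && location === expected;
    console.log(`${ok ? "OK  " : "FAIL"} ${oldUrl} -> ${resp.status} ${location}`);
  }
}

void verifyRedirects();
```

Running a check like this over the complete pre-migration URL inventory catches the "forgot some URLs" situation John describes before it costs half a year.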
How does Google crawl podcasts? Do links from guest podcasts on, say, Spotify make sense?

So there are, I guess, two aspects here. One is, we show some podcasts in the search results directly, and that's based on the ability to recognize where podcasts are. And we have a lot of Help Center content on that, which gives you information on how to set up an RSS feed for podcasts, how to make sure that we can find your podcasts, all of those things. So that's kind of the main thing I would say with regards to podcasts. And the other is, of course, the angle of, well, actually, these are also web pages. And when it comes to web pages, you can treat those just like any other web page. The thing to keep in mind with podcasts in particular, when they're hosted on web pages, is that we don't try to do any kind of text analysis on the podcast file to recognize what you're saying within the podcast. So if there's something in there that you think is critical for your pages to show up in search, make sure you put it in text form on your pages as well. A really simple way to do that is just to post the transcript of your podcast together with your pages, with links to the individual podcast episodes, for example. So in that regard, there's nothing really tricky around podcasts. It's just that there are those two angles: the podcasts themselves can be visible, and the web pages can be visible in search.
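For reference, podcast discovery generally hinges on a standard RSS 2.0 feed whose items carry an enclosure pointing at the audio file; the Help Center documentation John mentions covers the full requirements. A minimal sketch of generating one such item, with invented titles and URLs:

```typescript
// Hypothetical episode data; the titles and URLs are invented.
interface Episode {
  title: string;
  pageUrl: string;   // the episode's web page, which can also rank in search
  audioUrl: string;  // the audio file the RSS enclosure points to
  audioBytes: number;
}

function rssItem(ep: Episode): string {
  // In RSS 2.0, the <enclosure> element is what identifies the audio itself.
  return `
  <item>
    <title>${ep.title}</title>
    <link>${ep.pageUrl}</link>
    <enclosure url="${ep.audioUrl}" length="${ep.audioBytes}" type="audio/mpeg"/>
  </item>`;
}

console.log(rssItem({
  title: "Episode 1: Lazy loading and search",
  pageUrl: "https://example.com/podcast/episode-1",
  audioUrl: "https://example.com/audio/episode-1.mp3",
  audioBytes: 12345678,
}));
```

Posting the transcript on the episode's page, as John suggests, covers the second angle: the audio itself isn't transcribed for indexing, but the page text is.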
Following Google's guidelines, our pagination pages have become self-canonical. But the problem is that sometimes the later pages get impressions and clicks, and now we're unsure whether those pages should be canonical to the first page or keep their current state. What should we do to prevent the later pages from being indexed? We've written a description for the first page and sent the FAQ to Google with the schema. But is there a way that the later pages will not be displayed in the search results?

Good question. So I think we get this question a lot, and it's something where it feels like there are differences of opinion with regards to pagination. And I think one way to look at it is to consider why you're setting up pagination on your site. So, for example, if you have an e-commerce site and you have a category page with pagination on it, then one of the goals of that pagination is to make sure that the individual products themselves are actually findable. You can go to page two, page three, page four, page five, and you see the links to individual products there, and that can be really important to get those products indexed. On the other hand, it might be that you have all of those products already indexed because they're really well cross-linked within your website; then the indexing through the paginated category pages is not as critical. And that's the first decision you need to make there: is it critical for you that these paginated pages are indexed and findable in search, in the sense of, is the content that you're linking to from those pages critical to be linked to and recognized from those paginated pages?

If you decide that those paginated pages are critical for search and that the products are critically linked only from those category pages, then you need to let those be indexed. If you decide that these products are already indexed elsewhere, that they're well cross-linked within your website, then you could say, well, actually, page 2, page 3, page 4 is not something that I need to have indexed, because those products can be found elsewhere. And if you decide that those pages don't need to be indexed, then of course you can do something like the rel canonical here, to canonicalize back to the first page of that list. If you decide that they do need to be indexed, then you'd need to canonicalize to the individual paginated pages.

Usually the next level from here is when you apply filtering, or when you have other kinds of searches within those category pages. And usually that's the step where you can say, well, these definitely don't need to be indexed, because they're already indexed through either the paginated pages or normal cross-linking on the website. So all of those filters, that's something you can probably block from being indexed with either a noindex or with a rel canonical going back to your main category pages. So those are, I guess, the two aspects there.

It's interesting: when we talk with the engineering teams at Google, we'll get very strong opinions, and they'll go in both directions. And I think that's reflected in the fact that websites are set up very differently, and some of them need those paginated pages to be indexed, and some of them just don't and can focus more on page one. In general, though, if those pages need to be indexed so that we can find your products, then they can theoretically appear in the search results. There's no way to mark those pages as "I want you to use these for indexing, but don't show them in search"; that's not something you can flag with a meta tag.

In practice, in a lot of cases, though, if you set up pagination so that you're linking from page one to page two to page three to page four, then those further paginated pages will be even further away from the root of your category section. And usually that means if someone is looking for that category, they'll probably find the first page of the category, because that's just the strongest page on your website for that category. On the other hand, if someone is searching for maybe a category and a product that is only listed on page five, then maybe we will show page five in the search results.

So lots of options, lots of variations there. It's something where it's easy to talk with people, and they'll have very strong opinions about which way you should handle this. And it's worth considering that your site is kind of unique, and you need to figure out what makes sense for your site there.
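As a sketch of the two choices John lays out, here is how the head markup of a paginated category page might be generated, with a flag for whether the paginated pages should be indexable in their own right. The URL scheme and function are invented for the example.

```typescript
// Invented URL scheme: /category/shoes for page 1, /category/shoes?page=3, etc.
function paginationHead(categoryPath: string, page: number, indexPaginated: boolean): string {
  const pageUrl = page > 1 ? `${categoryPath}?page=${page}` : categoryPath;
  if (indexPaginated || page === 1) {
    // Choice 1: each paginated page is self-canonical, keeping the product
    // links on pages 2..n discoverable through those pages.
    return `<link rel="canonical" href="https://example.com${pageUrl}">`;
  }
  // Choice 2: products are well cross-linked elsewhere, so fold pages 2..n
  // back into page one with rel canonical.
  return `<link rel="canonical" href="https://example.com${categoryPath}">`;
}

// Filtered or sorted views of the category usually don't need indexing at all:
const filteredViewHead = `<meta name="robots" content="noindex">`;

console.log(paginationHead("/category/shoes", 3, true));
console.log(paginationHead("/category/shoes", 3, false));
```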
What would be the best thing to do with products that are out of stock, but might be back in stock?

Oh my god, Mihai, did you plant this question? No? OK, I think we talked about this briefly before. I guess the variation here is that they might be back in stock again at some point. And that seems like something where, if you're sure that this is more of a short-term out-of-stock situation, then I would just keep that page up. And if you're sure that they're unlikely to come back, and you're sure that you're not getting any traffic for them anymore, then I would just make them 404.

For the last two weeks, page actions like Request Indexing in Search Console have been temporarily disabled. Do you have an update on when this will be back to normal?

I don't have an update at the moment. I know the team has been working on this, but I don't have any timeline that we can share. Usually these kinds of things settle down fairly quickly, but sometimes there's more involved than just tweaking things slightly. And we also don't want to just re-enable the feature in the UI and not actually do anything with the submissions in the back end. We want to make sure that everything works well there.

On a large website, if a few pages have a few 404 outlinks within the domain, can Google consider those pages low quality, outdated, or not fresh?

I don't think so. It's something where users might get a little bit confused if they go to your site and then get lost within your website. But in general, for Google, it's completely normal that some links just lead to 404s. And when we crawl and index one page, we don't necessarily follow all of the links all the time directly; rather, we'll crawl that one page and then add all of those links to our scheduler and say, we should take a look at these other pages as well. And if at some point we crawl those pages and they turn out to be 404s, then we're just like, well, this link led nowhere, but that's fine; we have enough other content on the website to focus on. So we wouldn't flag a page as being low quality just because there are some 404 links on it.

As EAT is becoming more prominent for improving overall site quality, and important passages on a web page are becoming a ranking signal, should we consider including more FAQs on landing pages to be competitive in Google search results?

So I think the different things there don't necessarily have much to do with each other. We use EAT primarily as something for the search quality raters, which we include in the quality rater guidelines. It's not something where there's a simple algorithmic factor that just does this. So that's something where, essentially, what we tell the search quality raters to watch out for are things that normal users would watch out for as well. And if you improve your website in that regard, then that seems like something that would make sense, regardless of any algorithms and changes on Google's side.

With regards to including more FAQs on the landing pages, you're always welcome to do that. I think sometimes it makes sense to include FAQs on pages. Other times, especially when talking with some of our tech writers, I've heard that having too many FAQs on a page is kind of a sign that your content itself is hard to understand, and almost a signal that you as a writer should consider making sure that people don't run into these questions after they read all of your content. But sometimes having simple questions and answers on a page makes a lot of sense.

With regards to the structured data side of FAQs, that's something that's totally up to you. If you have pages with FAQs that match our guidelines, then we might be able to pick that up and show it in the search results. That's, I think, less a matter of being competitive in the search results, because it's not that you will rank higher with FAQs on your pages, but more that you'll potentially be visible with those FAQs being shown.
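As a sketch of that structured data side, here is a minimal schema.org FAQPage serialized as JSON-LD; the question and answer are invented, and per the guidelines the same text should also be visible on the page itself.

```typescript
// Minimal schema.org FAQPage. The Q&A content is invented for the example
// and is expected to mirror what's actually visible on the page.
const faqJsonLd = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  mainEntity: [
    {
      "@type": "Question",
      name: "Does this hotel have a pool?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "Yes, there is an outdoor pool open from May to September.",
      },
    },
  ],
};

console.log(
  `<script type="application/ld+json">${JSON.stringify(faqJsonLd)}</script>`
);
```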
Just a quick follow-up to that. I noticed that some travel websites are using FAQs for their hotel pages, and then you have an FAQ like, does this hotel have a swimming pool? How's the breakfast? And things like that. And I'm not saying that's an issue or not; I think that looks fine. But the documentation mentions that whatever is in the FAQ structured data markup should also be on the page. I'm just curious, should it be word for word? I mean, if they have the hotel facilities listed and it says pool, do you still need that complete text answer, like, "Does this hotel have a pool? Yes, this hotel has a pool," so that whatever shows up in the structured data markup is word for word what you have in the content of the page?

I think that's kind of the goal. So with FAQs based on the content itself, I think that's a little bit tricky, because the individual words might be on the page, just not in that order. But in general, with structured data, one of the things that our algorithms do is really try to make sure that the content itself is on the page. And it might be that with FAQs these algorithms are a little bit more flexible, in that they say, oh, well, most of the question is directly visible on the page, therefore we'll let this one go. But it is something where potentially those algorithms might get a little bit more strict and say, well, this text has to be visible on the page one-to-one, because we want to avoid the situation where we're promising something to users in the search results, and they go to the page and can't actually find it.

They'd have to dig for the information, yeah.

Right. So it's like, does this hotel have a pool? And you go there and you search the page for "does this hotel have a pool", and there are no results; like, why was that shown in search? Whereas if you search for just "pool" and it says yes, then it's kind of tricky. I could imagine our algorithms are currently in a state where they're a little bit more flexible, but I don't think that's something you can rely on. Because if we really have in our guidelines that you should make sure this text is on your page, then at some point our algorithms might say, oh, we really need to make sure that this is actually on the page. Yep.

A question related to the discontinuation of Flash. Oh, wow. Flash from the past. I work on an online gaming site where we still serve quite a lot of Flash games. In terms of quantity, they occupy roughly 50% of the pages on the site, but in terms of organic traffic, they only account for about 10% of our total traffic. After December 2020, should we anticipate a complete loss of traffic only on this portion of content, or will there be wider site repercussions if we don't take steps to remove all of those pages that serve Flash games?

So that's, I guess, an interesting question, because I think what you're referring to there is that Flash is going to be removed from Chrome, or something around that. In general, this wouldn't affect how we index this Flash content. And I wouldn't see it as something where we say, well, there's Flash on this page, therefore this page should not be shown in search. But rather, what I imagine is probably already happening here is that we're indexing this page based on the content that's visible in the HTML, in the DOM, when the page is rendered.
And if there just happen to be some elements on the page that we can't process, because they're in Shockwave files or whatever, Java applets, or, I don't know, whatever technology is still in use, then we'll essentially just ignore that part of the page. We won't say this page as a whole is bad; we won't say this page as a whole should not be indexed, because we have enough useful information from the rest of the page. So probably in your situation, my guess is that nothing big will change with the traffic that you're seeing to your website, because we're probably already indexing your content based on the HTML content that you provide for these games, not based on the content within the Flash file. In the past, it was a little bit different, because some sites were completely made in Flash, and there it was such that we did use the content from the Flash files as a part of indexing. But when it comes to games, there's usually not a lot of textual content in these Flash files anyway. So I would expect that we're probably indexing things just based on the HTML anyway, and probably you wouldn't see any change at all in search.

We were unsure whether the consent management provider banner would negatively influence the crawling of our pages, and therefore excluded Googlebot via the user agent. This means Googlebot doesn't get to see the banner at all, but the user does. Can this procedure be considered cloaking and lead to a penalty?

It does kind of go into the area of cloaking, especially depending on the way that you have those banners set up. And, I don't know, judging from your name, I'm guessing maybe you're based in Europe. One of the things that might work to your site's advantage is that if you only show this banner to users in Europe and you don't show it to users elsewhere, then in general you wouldn't be showing it to Googlebot anyway, because Googlebot tends to crawl from the US. So that's one approach to take here. The other side here is that when it comes to cloaking and a penalty or manual action, usually the web spam team tries to take into account the intent of what the site owner is doing. And if we can recognize that this is clearly a legal interstitial, and we're essentially indexing the content that users would see anyway when they go through that interstitial, then probably, from a web spam point of view, that would be less problematic. So that's kind of the other side there.

That said, especially if you're showing this banner to everyone, I would double-check whether it actually is causing problems for the indexing of your page, and maybe you can really just simplify things by also showing it to Googlebot. One way to double-check, with regards to Google indexing, whether this banner is a problem or not, is to use the Inspect URL tool in Search Console, fetch a live version of the page, have it rendered, and then check in the HTML of the page to see if the actual content is visible there too. And if the actual content is visible there too, then the banner probably doesn't cause any issues for indexing anyway. So maybe one of those options is something that would work for you.
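A sketch of the geography-based variant John mentions, keying the banner off the visitor's location rather than the user agent; the `x-country-code` header is a stand-in for whatever a CDN or geo-IP layer actually provides, and the Express setup is assumed.

```typescript
import express from "express";

const app = express();

// Hypothetical geo lookup; in practice a geo-IP database or a CDN-set
// country header would be used. The header name here is invented.
function countryOf(req: express.Request): string {
  return (req.headers["x-country-code"] as string | undefined) ?? "US";
}

const EU = new Set(["DE", "FR", "IT", "ES", "NL" /* ... */]);

app.get("/", (req, res) => {
  // The banner depends on location, not on detecting Googlebot, so there is
  // no user-agent cloaking. Since Googlebot generally crawls from the US,
  // it would not see the EU-only banner anyway.
  const banner = EU.has(countryOf(req))
    ? `<div id="consent-banner">We use cookies...</div>`
    : "";
  res.send(`<!doctype html><body>${banner}<main>Page content</main></body>`);
});

app.listen(3000);
```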
On a non-medical, high-search-volume niche site, do quality raters manually check the top sites of the niche and ask Google to demote the rankings of a site?

No, we don't do that. So quality raters are essentially one of the ways that we test algorithm updates, where we say we would like to make a change in this particular direction, and then we'll give information to the quality raters about that change. That could be in the form of side-by-side search results, where we say this set of search results versus a different one, and we generally give them information on what to double-check: should they just look at the titles of those pages, should they actually look at the content of those pages, should they look past just the content of the landing page, anything like that. And the quality raters will review all of these different search results and give us advice, and say, well, the version without your change was better, or the version with the change was better. And based on that, we can work step by step to improve the quality of the search results.

What they don't do is say, this website is bad and you should demote it in search, kind of thing. So just because a quality rater looks at a website doesn't mean that they're going to change the search results just for your website. They review search results on a larger scale, and any search changes that we make tend to be reflected across a large number of sites. So just because one site goes one way or the other doesn't mean that the search results will change overall just because of your website. So, I don't know, I guess the short answer is really just no. The quality raters don't manually rate websites and make changes, or make suggestions with regards to changes, for the rankings of individual websites; rather, they review the changes in the algorithms overall.

One of my clients has an English version and a French version of their website with totally different designs. When implementing hreflang, will this be an issue, or should both versions look the same?

That's perfectly fine. Sometimes different language or different country versions look completely different. Sometimes the audience targeting is slightly different. Often it's such that the words are not translated one-by-one, but rather you do keyword research in one language and you do keyword research in the other language, and you're essentially trying to cover the same intent, but the content and the way you present it can be completely different. So the websites being a little bit different, that would be totally fine.

Is it OK to submit the same sitemap in a sitemap index file and separately in Search Console? Because when we submit it separately, we see the index coverage of the individual file.

You can definitely submit them individually and separately if you want to do that. In general, we'll crawl the sitemap file once, but if you're looking at the metrics in Search Console and you want to see both the overall metrics and the individual metrics, then submitting it separately is perfectly fine. I don't think it would change anything.

How does Google treat a 302? What URL does Google show in the search results, the original or the redirected URL? On what URL will Google Search Console report clicks? And what happens if we keep the 302 for a long time, like months?

So these are all good questions, and they all go into the topic of canonicalization. When it comes to canonicalization, essentially the problem that we're trying to solve is that we know of multiple URLs that lead to the same content. And it could be that it's exactly the same content, which is usually the case with a redirect.
It could be that it's mostly the same content, which could be the case if you have the same article on different parts of your website, for example. And with canonicalization, we try to take into account all of the signals that we have for that set of URLs and pick one of the URLs to be the representative URL to use in search. And that's the canonical URL. And we use things like redirects, 301 and 302 redirects, the rel canonical, external linking to the website, the sitemap file, and some other factors, like which of these URLs looks nicer. And we try to compile all of those, with different weights, and figure out which of these URLs to pick as the representative URL. And we'll try to use that.

Assuming everything else is the same, and you have one with a 301 and one with a 302 redirect, probably in the beginning we'll say a 302 redirect means the original URL should remain indexed, and a 301 would mean the destination page should be indexed. However, a 302 is a temporary redirect, and if you make that redirect permanent, then our systems might say, well, it's kind of like a soft 301 redirect, in the sense that you're not saying it's a 301, but you're treating it as one, so maybe we should take the destination page into account. In practice, all of these subtle things tend not to matter as much. It's really more a matter of the bigger picture: within the internal linking of your website, which of these pages are you linking to? Within your sitemap file, which of these pages are you actually referring to? All of these things add up, and then usually we'll try to pick the one where you're really being clear about what you want to have indexed.

The change that you see in search is just that we show this URL in the search results, and we'll report on it in Search Console as well. It doesn't mean that there's any ranking change. So if you have two URLs and we picked the wrong one, or the one that you didn't want, it'll rank exactly the same. It's just, well, we picked the one that you didn't want. And if you look at Analytics, or if you look at Search Console, you have to think about the one that you didn't want instead of the one that you actually did want to have indexed. So it's more a difference with regards to reporting than with regards to ranking.

John, I've got a question for you. Sure. In the case where you do a 302 redirect, I mean, Google has said it would potentially, in the long term, treat them the same. But the question is, if you 301 redirect, and then undo the 301 redirect after a while, could it be the case that Google afterwards wouldn't trust the 301 redirect, and would say, OK, we've seen in this case it moved over and then came back to the original state? I mean, how does Google treat it in this scenario?

Usually the canonicalization algorithms don't think that much, to, I don't know, simplify it a bit. So it's not that they would say, oh, this guy is trying to be sneaky and redirecting back and forth, kind of thing. The canonicalization algorithms tend to look at the current state, and they'll just say, oh, well, this redirect is no longer here, so this factor from the redirect no longer counts for that other URL; what do the other factors say? And based on that, we pick which one to use as the canonical URL. So it's not something that, from a canonicalization point of view, our algorithms have to second-guess. It's more that we have all of these different factors, we give them individual weights, and then we calculate the factors together. And whichever one of these URLs has a higher number in the end, that's the one we'll try to pick.
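To illustrate that "weighted factors, highest number wins" idea, here is a toy model in the spirit of what John describes; the signals and weights are entirely invented and are not Google's actual factors.

```typescript
// Toy canonical selection: each candidate URL gets a score from weighted
// signals, and the highest-scoring URL becomes the representative one.
interface Signals {
  redirectTarget: boolean;     // other URLs in the set redirect here
  relCanonicalTarget: boolean; // rel canonical points here
  inSitemap: boolean;
  internalLinks: number;
  looksNicer: boolean;         // e.g. short, clean, no tracking parameters
}

const WEIGHTS = { redirect: 5, relCanonical: 4, sitemap: 2, link: 0.1, nice: 1 };

function score(s: Signals): number {
  return (
    (s.redirectTarget ? WEIGHTS.redirect : 0) +
    (s.relCanonicalTarget ? WEIGHTS.relCanonical : 0) +
    (s.inSitemap ? WEIGHTS.sitemap : 0) +
    s.internalLinks * WEIGHTS.link +
    (s.looksNicer ? WEIGHTS.nice : 0)
  );
}

// A removed redirect simply stops contributing on the next evaluation --
// the current state is what counts, as John notes above.
function pickCanonical(candidates: Map<string, Signals>): string {
  let best = "";
  let bestScore = -Infinity;
  for (const [url, s] of candidates) {
    const sc = score(s);
    if (sc > bestScore) {
      bestScore = sc;
      best = url;
    }
  }
  return best;
}
```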
And one thing that also plays into this a little bit, which might be something that you would see in a case like this, is that we try to be a little bit more persistent with regards to the canonical URL. So if you redirect somewhere and then change back, our algorithms are not going to just swap back over to the old one. Rather, they'll say, oh, it's kind of an unsure state at the moment, but we'll keep the old one first. And then over time, we'll say, oh, well, it's clear now: this is actually a change in the preference from the site owner.

A follow-up question: how long does it take for Google to recognize 302 redirects? I mean, I know that you can't give an exact time, but approximately how long?

I don't know. To recognize them as being permanent, you mean? Yes, exactly. I don't know if we have any defined time for that. I'm trying to think. I think in the robots.txt documentation, we have a similar case documented with regards to, I think, 403 errors and 404 pages, where a 403 says you're not allowed to request this page, and a 404 says this page doesn't actually exist. I don't know if we have specified a time there, but that might be something to look at, to see if there's a time there, kind of as a rough guess for what might be the case with 302s and 301s as well. I don't think it would apply one-to-one, but it gives you some idea of the order of magnitude. My general guess is it's more a matter of months rather than days, because sometimes 302 redirects just happen to be in place for a longer period of time, and they're still 302 redirects. Thank you.

All right. We're kind of running low on time. So maybe, if any of you have any questions left, I'm happy to answer them.

I have a question. I joined the meeting about 15 minutes late, and I don't know if you had time to see the question that I posted there. Go for it. Many questions here are regarding migrations, and we have about the same issue. We have two e-commerce websites on two different domains, and we're migrating to one single, more inclusive domain. So we're basically migrating all the products, not the categories; we're really interested in the products having a good ranking on website C, the third website, where we're moving the products. So we've been thinking about how to do this, and we came up with two ways, SEO-wise. The first one was using the Google Change of Address tool and redirecting all the traffic for the products from websites A and B to website C. And the second one was just leaving websites A and B where they are and replacing the e-commerce buttons with links to the new website. So it's like you find on many websites, a button called Buy From Amazon; it's just a link to the other page. Now my question, because we just thought about these two options to do this migration, would you propose a better way of doing this, SEO-wise?

So the Change of Address tool you wouldn't be able to use there, because you have two domains that you're moving into one. You'd only be able to do it for one domain moving to a different domain. So that's kind of the first thing that you'll notice if you try to implement that in practice. You can still migrate like that, though: you can set up 301 redirects from those two domains to that one central domain. That's essentially a legitimate site change that you can make.
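A minimal sketch of that per-URL 301 setup, assuming an Express-style app sitting on the old domains; the host name is a placeholder for "website C".

```typescript
import express from "express";

const app = express();
const NEW_ORIGIN = "https://site-c.example"; // placeholder for website C

// Redirect every path on the old domain to the same path on the new one.
// If paths change as well, a per-URL map (old path -> new path) goes here.
app.use((req, res) => {
  // 301 (permanent) signals that the destination should be indexed.
  res.redirect(301, `${NEW_ORIGIN}${req.originalUrl}`);
});

// Serve this app on domains A and B; website C serves the actual content.
app.listen(8080);
```

This is also the kind of setup the redirect-verification script from earlier in the session would be pointed at.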
It's kind of like, combining multiple sites together is always a little bit tricky, because you don't know exactly what the outcome will be. But it is something where you can make this kind of change, and usually it ends up fairly well. So that's probably the direction I would take. The other approach, with having links to that shared checkout site, I mean, you can go down that direction, but I feel in the long term you're kind of competing with yourself, because you would have essentially three websites for the same content that you would otherwise have on just one website. And by having it split out across those three websites, you're kind of diluting the value of your content there.

I understand that, but here is my reasoning. If we go the other route and just do the redirects: we have about two or three products that rank really, really well, like in first place for their searches. If we lost that ranking, we would not be able to replace that entire traffic just with PPC. So SEO is really important for us; if we lost that for six, nine, twelve months, it would be the end of the business. So it's really important for us to get this right. And this was the reason, this was a loophole that I thought we could use, at least for a while, until website C gains sufficient visibility and we can shut down websites A and B. This was the reason why I proposed this version, because many people that use the other approach of just redirecting traffic report that the traffic just falls.

Yeah, usually it works well. I think we kind of end up hearing about all of the cases that don't work well here. But usually it works out fairly well. I totally understand the worry, though, that this is something that could be critical. I think the only thing you'll notice, if you keep those two websites and also set up the third website, is that the third website will have a really hard time gaining visibility in search, which means you'll have a really hard time being able to make that cut-over. Because if you're not redirecting the value from those existing websites to your shared, common version of the website, then essentially you're creating another website, and you have to work to promote that other website as well. So my worry there would be that you can do that, and you can keep those individual pages, but you'll never get to the situation where that third version, the new version, is suddenly more visible in search than the existing ones, because you never pass that value to the new version.

Yeah, we were thinking, what if websites A and B remain there, just like some blogs that redirect some traffic to the new one, hoping that the new one gains a better position. And then, for example, if we see the third website getting closer to the positions that the others had, we just take the first websites down and redirect to the new website. And in the end, Google would probably just look at the data and see that there's nothing there, just a redirect, no data on the old website, because we just deleted it. And probably, in the end, the new website would take over that ranking, and we would kind of keep our position. But I'm not sure about that.
Now, I mean, this also seems like something that you can test incrementally, where you take that more kind of safe, I don't know if it's really safer, but kind of the slower approach of keeping those two websites, and then, on a per-product basis, setting up redirects. And that would give you a little bit more confidence in whether this new domain is picking up the value from the old one or not.

Okay, that's a good idea. So I can try redirecting half of the products, and maybe keep half of the products with the buy-from-website-C option, and see how that plays out.

Now, I mean, when we talk with the engineers about these kinds of situations, usually they just say, oh, you should just redirect everything to the right version. But I totally understand if people are a little bit, I don't know, worried about a big change like this, because you can't really test it ahead of time. You could test something from a usability point of view, like, do users like this one or the other one better? But a redirect is kind of a one-shot thing, and then it's there or it's not there.

Okay, thank you very much for your time and your answer. Sure.

John, are you certain that you cannot move multiple sites? That you cannot use the Change of Address for multiple sites towards a given domain? Because I thought it was possible.

I think it was possible with the old version, but with the new one, which has been there for about a year now, you can really just have one change of address set up for a specific site.

Okay. I do have a site in Search Console that says two of your other sites are moving to this site. Oh, okay. I'd have to look at that. But to be honest, this is the case for HTTP and HTTPS, I think, or www and non-www, or something like that, for the same domain. So maybe it doesn't apply to different domains, but just saying that it's... Oh, you're using it for www and HTTP? That's interesting. Okay, because I thought we tried to check for that and prevent people from doing that, because it doesn't make that much sense as a site move. But okay, interesting, yeah. But I think that doesn't change much with regards to the question of, like, should I do it or should I not do it? Yeah.

Cool, okay. Well, thank you all for joining in. Thanks for all of the questions that were submitted. I hope you found this useful. The next one in English, I think, is lined up for Friday. The German one is lined up for Thursday. A JavaScript one is tomorrow. Wow, yeah, so many things happening. Cool, all right. Thank you all, and wish you all a great day. Thank you. Stay safe. Bye. Bye. Bye. Bye, thank you. Bye.