All right. Welcome, everyone, to today's Google Search Central SEO Office Hours Hangout. My name is John Mueller. I'm a Search Advocate on the Search Relations team, and part of what we do are these office hours hangouts, where people can join in and ask their questions around their website and around web search, and we can try to find some answers for you. A bunch of things were submitted already, a lot more than I expected. I thought this would be a fairly low-key one, but it's always good when things get submitted. But if any of you want to get started with the first question, you're welcome to jump in.

Yeah, so hi, John. We have a question that has been keeping us on the edge for quite some time, because we would really love to know when the URL Inspection tool will start to work again. Do you have any suggestion or any possible time frame, because it hasn't been available for six weeks? We had some great results and really fast indexing in the past with the URL Inspection tool, and so the crawling of our content is obviously slowed down this way. We would just be very interested in a possible relaunch.

I don't have any news on the timing there. But in general, websites should be able to get crawled and indexed normally within a reasonable time without using manual tools like that. So my general advice there would be, if you rely on this for normal content that you add to your website, to think about ways to really improve the quality of your website overall, so that our systems are keen to go to your website all the time to get the freshest information. To me, relying on the tool would be a signal that something on your side isn't the way that our systems would ideally like it to be, and less that you should use this tool as a way to do normal site maintenance and normal updates. I definitely see its uses in cases where something urgently needs to change, where you made a mistake on your website and need to get that updated as quickly as possible. That's something this tool is fantastic for. But for everything where you're updating things or adding new posts to your site, if you need this tool, then it feels like you're covering up the actual problems on the website. Because from our point of view, search should just work automatically. It's not something that should require any kind of manual intervention.

Yeah, thank you very much. So we'll look into that. It's not that the crawling takes too long or has any crucial problems. It's just that, in our experience, it's obviously way slower than when you manually trigger the indexing. So that's why we're curious to see when the Inspection tool will be up again.

I don't have any updates on that. I know the team is working on it, so it's not going away or anything crazy like that. But it is something where internally people talk about it and say, well, people shouldn't really be using it for any kind of normal site changes. They should try to find ways to make that update happen automatically with sitemaps and all of that. But yeah, hopefully it'll be back soon.
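As a side note on the sitemap point: the automatic path John describes usually just means keeping an XML sitemap with accurate lastmod dates, so that new and changed URLs get picked up without manual submission. A minimal sketch, with a hypothetical URL:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- hypothetical URL; lastmod signals when the page last changed -->
    <loc>https://www.example.com/news/new-article/</loc>
    <lastmod>2021-01-05</lastmod>
  </url>
</urlset>
```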
Cool. Any other questions before we get started?

If I may. Sure. OK. First of all, thank you for having this call and for giving the information and answering the questions. I hope you will do that. My name is Alexei Ivanov, and I'm from Russia. I represent Rambler Media Holding. We have a question that we asked on the page where you have been gathering the questions. But the thing is, when you sort the questions by popularity, it just disappears, and when you sort them by time, it is the first one. So I wasn't sure if you would be able to see it and answer it, so if I may ask it here, that would be great.

The question is quite specific, and it relates to localization. It's about LiveBlogPosting markup and the live badge. What we saw is that, for example, if you take bbc.com and search for some articles from bbc.com from the US, you might find that the result has a live badge. And you can see the same badge if you search for their articles from Russia. So we added this markup too, and we found out that for our website, which is in Russian, this works if you search from the US, and we get the live badge. But it doesn't work if you search from Russia. And our goal, as you might understand, was to get this badge in Russian results. So any comments on that? Maybe you can tell us whether this works with AMP, for example, because we searched through the documentation, and some details can be found, like a tip: use the LiveBlogPosting metadata markup so your blog can be integrated with third-party platform features. So, any comment?

I don't know the specifics there. I thought the live blog posting was also something that we're just trying out with individual sites at the moment, so that may be playing in there. But if you could drop maybe some examples into the chat here, then I can pick that up afterwards and pass it on to the team to check out. Or if you have the examples in the question itself, then I can pull it from there. No, we don't have examples in the question itself, but I think my colleague or I will drop some screenshots into the chat here. Fantastic. Cool. OK. Happy to take a look. Thank you.

Hi, John. Hi. So I had a follow-up on URL submission in Search Console. Is this tool going to come back with more features, or with the same features that we had already? Because of the way it is taking so much time, we feel like it might also be revamped. I don't have any updates on that, so you'll have to stay tuned. OK. Cool.

Hi, John. Hi. Hi, this is Derek from Singapore. I have a question regarding conflicting hreflang signals. Let's say we have implemented the correct hreflang using sitemaps, but for some reason the web pages also have another set of hreflang annotations on them, even though they are not the most correct version. I know that we have to try to minimize conflicting hreflang like this. So my question for you is, how does Google actually treat these conflicting hreflang annotations? Does it prioritize the sitemaps over the on-page hreflang, or vice versa? Any comments on that?

What kind of conflicting hreflang do you have? So, for example, I may have an hreflang directive in the sitemap that is correct and says that this page is meant for English US users, but then the source code for the same page may be showing French US.

Yeah. OK. So what would happen there is we would combine those. From our point of view, hreflang is not something where we say you can only have one language or country version on one page; rather, you can have multiple country versions on the same page, and you can have multiple different levels. So you could say this is the page for English in Singapore, English in the US, and English in the UK, and you have a different page for English in Australia, for example. But you can have one page with multiple country or regional targeting on there.
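As a rough illustration of that last point, hreflang annotations in the HTML head for one page that targets several English-speaking regions, with a separate page for Australia, might look like this (URLs are hypothetical):

```html
<!-- One page annotated for several regions; a separate URL handles en-AU -->
<link rel="alternate" hreflang="en-SG" href="https://example.com/en/page" />
<link rel="alternate" hreflang="en-US" href="https://example.com/en/page" />
<link rel="alternate" hreflang="en-GB" href="https://example.com/en/page" />
<link rel="alternate" hreflang="en-AU" href="https://example.com/en-au/page" />
```

The same annotations can also be declared in a sitemap instead of the HTML.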
So if you have some hreflang in the HTML and some in the sitemap, then we would try to combine that and add it together. And that means that if you have multiple different country versions across those different sources, we would just combine that into one setup. The one place where it would get confusing, or where we would see it as conflicting, is if you have one country-language version on the page and you use the same country-language version for a different page in the sitemap file. That would be the kind of situation where our systems would probably have to guess. And as far as I know, we don't have any prioritization where we say sitemaps are better than the HTML or better than the headers. Rather, we would just see that this doesn't work, and we would probably just drop that pair. OK, thank you so much. Sure.

John? Sure. Hi. How are you? So if a publisher owns several different sites and sometimes promotes or recommends the same content across its network, but they're not trying to tell Google that it's original, they just want to signal that it's part of the same network, what should they do to signal that it's not original content but is just being promoted or recommended? Should they use a rel equals canonical, or is there some other mechanism?

Yeah, usually the rel canonical is the best method to let us know that this is your preferred version. And from a practical point of view, it's not so much that the web spam team would get upset about that situation. It's just that the publisher is spreading themselves thin. Instead of saying, this is one really strong version of my content, it's like, I put it on 20 different sites and they can all rank, maybe. So it's kind of diluting the value. But with the rel canonical, you can help to concentrate the value on the version that you prefer to have indexed.

Yeah, I think it's not really trying to rank, in fact. It's more of a, here's our network, and if you're interested in this, you might also be interested in that. It's just to recommend. Yeah, yeah. I mean, that's a totally common use case for that. OK. All right, thank you. Sure.

John, a quick follow-up on that. So in cases where there's a network, it's the same business but it has, let's say, 10 websites, and each website links to another one using keyword-rich anchor text. Does Google determine that, oh, it's from the same business, so it's kind of like an internal link? Or can it be a problem? Can it attract a manual action or anything like that?

For the most part, we can figure that out, and we treat those as normal links. So it's not problematic. I think it would get problematic if you have a really large network of sites, and if it starts looking more like you're just using these links for SEO purposes rather than to say, well, these are other places where you can also get similar products or find our business online.

Yeah, I'm asking since there's a thread in the private forums regarding that, and somebody has a manual action. And within the manual action, a few of the example links are from a website that's part of the business's network of websites. So, yeah.

I mean, usually with manual actions, there is a much bigger pattern involved than just, oh, there's one bad example in the set there. It's really a bigger-picture problem. So it can happen that with some of the examples, if you look at them, you can be picky and say, well, this link is actually OK.
But if you look at the bigger picture, it's like, well, there's actually a bigger pattern here, and this is just one thing that fits into the bigger pattern but is also something debatable that you could talk about.

OK, so should that webmaster do anything about that link? If he or she sees it in the manual action example links, what should they do about it?

My recommendation there would be to fix the bigger issue, the broader overall issue, first. And if there's something specific with that link where you're saying, well, actually, I'm going to keep this link because I think it's right, then that's something I would include in the text of the reconsideration request. And say something like, there were five example links, I cleaned up the bigger issue that the five links point at, and there was one extra link that I think is OK, so I kept it.

And one more thing regarding this. As far as I understand it, this is a network with very large websites, they have millions of pages, and the business sometimes buys other websites. So in this case, the links in those two examples that the team sent were from a property that the business just bought, and it was a very large, million-page website. Can this pose a problem in the sense that, if they just bought the website, Google has no way of knowing that it belongs to that business now, or anything like that?

Maybe. I don't know. I'd have to take a look at the thread. It sounds like there is a bigger story behind it. OK. Yeah, OK, cool. Cool.

OK, let me run through some of the questions that were submitted, and we can chat more afterwards as well. I don't actually sort them in any particular order when they come from the YouTube comments, so I'm just taking them as they show up here. I think this is generally sorted by whatever YouTube's ranking system does. I don't know how you can game that.

Do you treat anchor text that contains many words differently in comparison to anchor text that contains only two words? I mean, do you assign more value to those two words when you compare it to anchor text that has seven or eight words? For example, a two-word anchor text like "cheap shoes" versus a seven-word anchor text like "you can buy cheap shoes here". Can you elaborate on that?

So I don't think we do anything special for the length of the anchor text; rather, we use the anchor text as a way to provide extra context for the individual pages. Sometimes, if you have a longer anchor text, that gives us a little bit more information. Sometimes it's just a collection of different keywords. So from that point of view, I wouldn't see any of these as being better or worse. And especially for internal linking, you probably want to focus more on things like how you can make it clear for your users that when they click on this link, this is what they'll find. So that's the way that I would look at it here. I wouldn't say that shorter anchor text is better or that shorter anchor text is worse. It's just different context.

Let's say you look at a page and test it to see what's there, and you decide that it wasn't relevant after collecting signals. Let's say I then improve that page to make it the best out there for those queries.
Would you automatically test it and try to see if it's the best page out there, even though you've collected all those signals and it wasn't the best piece of content for those queries in the past?

Absolutely. Pages can change over time. Pages can get significantly better over time, and pages can get worse over time. That's absolutely possible. So if we've collected signals and found the page to be like this in the past, it doesn't mean it will always be like that. From that point of view, if you have pages on your site and you significantly improve them, our systems should be able to look at that over time and say, well, this is a much better page now than it was before. So that's something that definitely gets updated.

And I think the hreflang question we already talked about briefly. I noticed that free websites created on business.site have a followed link in the footer that goes to a page that redirects to google.com/business/website-builder. Isn't this against Google's guidelines on link schemes?

I don't know. I didn't take a look at the exact example here. I think we looked at this one in one of the previous hangouts a while ago, as this kind of started rolling out. In general, when it comes to these kinds of links, what we try to look at is the specific anchor text there. So if it's something where, when we look at it, it looks like it's promoting this website in a way that uses very keyword-rich anchor text, then that would be more problematic. If it's essentially just linking to the URL, or if it's using the business name as the anchor that links to the website, then usually that's less of an issue. So from my point of view, if this were any random website, I wouldn't really say much there. But since it is a Google property, I will pass it on to the web spam team just to double-check that they're OK with this. I don't know what will happen there with the web spam team. It's very possible that we already ignore these specific links, because these are the kind of links that are very easy for our systems to pick up and say, well, we can just ignore those. But it is always awkward to get these kinds of reports that Google properties aren't doing things perfectly.

What's the difference between these? I think the problem is that everyone that asks those types of questions hopes that the answer is going to be "you got us, OK, you're all right as well," rather than "OK, we'll fix our site." You're never going to go in the direction of letting everyone else get away with it rather than just fixing your own problem. So I don't know why people ask these questions. Yeah, I mean. Well, forgive all of your penalties. You're right. You caught us. Everyone's fine to do it as well. Yeah. It's just a question that gets nowhere.

Yeah, I totally understand that it's frustrating, especially if a website got a manual action for buying links or doing this kind of thing with footer links across other sites. It is frustrating to see other sites kind of get away with that. But yeah. If you build a $100 billion network, then I think you have a little bit of leeway to do what you want. That's my opinion. I don't know. Yeah, I think it's also worth holding our sites to a little bit of a higher standard, just to make sure that we're really doing the right thing. Yeah, but you'd be linking from one very high-quality website to another very high-quality website with a purpose. Is that not fine? No. I don't know.
We'll see what the web spam team says. I look forward to your penalty. It wouldn't be the first time. I mean, one of the tricky parts with all of these things is that just because someone works at Google, they don't know all of the ways to do SEO perfectly. So it is very common to see sites across Google do SEO in a way where, if it were to come up in the forum, people would be like, oh, gosh.

But it's not my style, obviously. I'm not sure Google should be punishing naivety, though. It should be punishing people that deliberately do things to game the system. If you don't know anything about SEO, then by definition, nothing you've done could be penalty-worthy, because you're not trying to game the system. Yeah, I think the way I see it is more about leveling the playing field, in the sense that it's not always the intent but rather the effect. And if our systems can't neutralize that effect on their own, then maybe we need to manually neutralize it so there is no negative effect on the search results.

John, I had one follow-up on this link penalty, or link manual action. OK. I was just going through the link schemes document. There are basically two types of links that Google has documented: the first paragraph talks about link schemes, and the next paragraph talks about unnatural links. In the link schemes section, I noticed that buying or selling links is against Google's guidelines, as is excessive link exchange, the "you give me a link, I will give you a link" kind of thing. And the other section was about unnatural links, where keyword-rich anchors were mentioned. So I was a little confused about whether Google really considers both these sections for manual actions, or whether only unnatural links lead to manual actions. Because for the first section, link schemes, I never saw any manual action or any penalty, even though some websites use this excessive "give me a link and I will give you a link" kind of strategy. I have also personally seen a lot of websites buying links; they are just buying the links. But how does Google treat them? Does Google also issue manual actions to websites for link schemes?

Sure. Yeah, absolutely. I mean, it's always tricky to see this from the outside, because just because a website has a manual action doesn't mean it will not show up in Search. Unless there is something that is really problematic with the content itself, we would keep that website in the search results. We would just try to neutralize the effect that is happening there. And sometimes we can't isolate that effect completely, and we neutralize a little bit more than just that effect. But essentially, just because a website is in the search results and they're doing something shady doesn't mean that they're getting away with doing something shady, or that they're ranking because they're doing something shady. So on the one hand, our systems try to automatically just ignore those things. And even if there is a manual action, it wouldn't remove the website completely from Search.

So it means that Google also just neutralizes those link scheme links? Yeah. I mean, we try. I'm sure there are situations where we don't manage to do that automatically, and that's what the manual actions are for. And we can't catch everything on the web. But it is one of those things that we work on. All right. Thanks. Sure. All right.
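Coming back to the earlier question about a publisher promoting the same article across its network of sites: the rel canonical John recommends is simply a link element on each syndicated copy pointing at the preferred original, for example (domains are hypothetical):

```html
<!-- On the copy hosted at site-b.example, pointing at the preferred original -->
<link rel="canonical" href="https://site-a.example/original-article/" />
```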
Now a question from the new Crawl Stats report. What's the difference between discovery and refresh? In our case, it's showing 84% refresh. Does that mean 84% of the time Google is crawling known URLs from its database, and only 16% of the time it crawls sitemaps and outlinks from the URLs that it crawls from the known URL database?

I'm not 100% sure what exactly we would put into each of those buckets. But in general, we do split things up into refresh crawling, where we try to update the information that we have on a site, and discovery crawling, where we try to find new URLs that we've heard about from the website, which could be things like new internal links or external links pointing at your website. So that's something where, for most sites, I would imagine a lot of the crawling is concentrated on just refreshing the information that we have. And that refresh crawl doesn't mean that we're just updating the page's content; we're also looking for new links, which we can then use for discovering new content. And yeah, that's generally how we deal with that.

Does a link that qualifies as referral traffic, so it gets clicked, get more link equity than a link that has never been clicked on? I don't think so. On the one hand, I don't think we would be able to see what people actually click on. On the other hand, I don't think that would usually make sense. Sometimes links are out there and they're really important references, but they rarely get used by users. That can still be a useful reference.

I have a question about indexing queries that are non-English. Does Googlebot actually go through a translation algorithm in order to understand the meta title and meta description before indexing them? If yes, is it the same algorithm that is used in Google Translate, and how can we be assured when it comes to accuracy?

So for the most part, we index the content on pages the way that we find it. It's not that we try to normalize everything into English and then only index it in English, and then, when people search, we try to understand the query, translate it, and show it back again. We essentially index the content the way that it comes. So if you have a website in a language where maybe Google Translate is not so good yet, we will still index that content in that language. And when someone searches in that language, we will try to map those words and point to them directly as well. So from that point of view, we don't need to translate things into different languages.

That said, we did some experiments in the past, and I don't know if that's still live in some regions, especially in places where we don't have a lot of content. What we have done is, when someone searches in their local language and we recognize that there isn't a lot of content in that specific language, then we may try to translate the query into a different language, bring results from that different language, and show those as well. And say, you searched for this in your language, and here are some results that were in English, and we can show you the Google Translate version of those pages. So it's not so much that we translate the pages before indexing them, but rather we take the query and say, oh, there isn't a lot of information for this, but for the translated version there is, and then we try to search for the translated version of the query itself. And that's something you would see in the search results directly.
If you search, you kind of see that this is happening. I don't know if we still do this, or if it's just in individual locations, but it is something that, depending on your location, you might have run into.

What about the issue of new publishers appearing in Google News after the December 2019 update? I don't have any update from the Google News side. I'm also not on the Google News team, so I can't really help you with those specific issues. My recommendation there would be to go to the help forum. In general, when it comes to Google News, what we show there uses different ranking algorithms than we would use for Google Search.

On many PDPs we have very old user reviews and professional reviews. Can old dates on those reviews create a negative impact for that page? So PDPs, I think, are product detail pages, so on an e-commerce site, for example, the landing page for one particular product. It's totally fine to have old reviews on a page like that. I don't think there would be any downside to having those older reviews there. I think that can be very useful for users.

Is the search volume of a brand a ranking factor? How does Google know the brand of a website and show sitelinks, through click-through rate or backlinks? I don't think the search volume of a brand would be that useful as a ranking factor, because if someone is searching for that brand, we would already try to show that brand anyway, because the brand's landing page is probably very, very relevant for those kinds of queries. So I don't think the volume of brand queries themselves would make a big difference there. With regards to knowing the brand of a website and showing sitelinks, that is sometimes hard and sometimes quite straightforward, when we can recognize that a website is very relevant to a specific query. Sometimes we can recognize that from the site-wide information on the website and see, oh, well, this whole website is on this specific topic; maybe that is the name of the website, or maybe that is the brand of the website. So it's less a matter of people linking to this website saying this is a brand, or a matter of people clicking on the links, and more that this website is just very relevant for that specific phrase or word.

Event rich snippets: I have a list of sports events on my website that are imported through an iframe. Can crawlers read the data in the iframe? That's tricky, because sometimes we can and sometimes we can't. So my recommendation there would be to think about what you would like to have happen here, and to make it as obvious as possible for us that we should be doing that. If you're embedding these sports events on your website and you want to have them associated with your website, then my recommendation would be to try to implement them directly on the page itself. If you can't do that within the static HTML, maybe there is a way to pull them in using JavaScript, in a way that we can render the page and also see those events on the page. On the other hand, if you don't want those events associated with your website, then maybe using something like an iframe is fine, but you would probably also want to use robots.txt on the iframed content itself, so that you can really be sure that we don't take them into account. So that would be my recommendation there. I think using an iframe and saying, I don't care if they get indexed, maybe they will, maybe they won't, that's kind of fine.
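If the sports events are rendered into the page itself, as suggested above, what Google would read is ordinary Event structured data. A minimal sketch with hypothetical values:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "SportsEvent",
  "name": "Example FC vs. Sample United",
  "startDate": "2021-02-20T19:30:00+08:00",
  "location": {
    "@type": "Place",
    "name": "Example Stadium",
    "address": {
      "@type": "PostalAddress",
      "streetAddress": "1 Example Road",
      "addressLocality": "Example City"
    }
  }
}
</script>
```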
But usually, when you're running a website like this and you're trying to optimize it, you have something specific in mind that you would like to have happen. And if that means you want those things to be indexed on your pages, then make it easy for us to recognize that. If you don't want them indexed, then make it so that we can recognize that they shouldn't be indexed.

Can I use an anchor in an event snippet link? I mean, you can definitely use it, but whether or not we would show it, I think probably we would not show it. As far as I know, the only place where we would show links in rich results is the FAQ markup, in the answers to the individual questions. Otherwise, I think we don't show links that are included in rich results.

Do I have to satisfy all the criteria for event snippets to show? Other websites don't satisfy them, and their event snippets are visible. I don't know. It kind of goes back to the first part there: what do you want to have happen? If you really want these snippets to be shown for your website, then I would make it as clear as possible that everything is OK, that we can recognize them, that you're following the guidelines, and that you're doing everything you can to make them appear. So that would be my recommendation there, and it's kind of regardless of what other websites are doing. It's really a matter of: is your website doing everything it can to make sure that our systems can recognize that it's relevant and OK to show those snippets?

I'm very interested in the development of web stories. As of now, they only appear in Google Search and Discover in the United States, India, and Brazil. Is there a time frame for when they will come to Europe? I don't know. Good question. Whenever I think about web stories, I assume that they're already everywhere, but apparently they're just shown in individual places. The thing with web stories is that they're also normal HTML pages, so they can rank in the normal search results like any other kind of page. However, I think the embedding, especially in Search and Discover, where it's a little bit of a fancier type of embedding, is something that is limited to individual locations at the moment. With all of these things, I can't give any prediction of when they will be expanded and come to other locations. Sometimes it's a matter of refining the technical details and the UI; sometimes it's a matter of policy questions as well. That's always a bit tricky and not something I can guess for other teams at Google.

John, just curious regarding web stories: what do you do if you have a certain type of content that you wish to present in a long-form format, maybe thousands of words, but you also want to benefit from exposure via the Discover and Search web stories carousels? Should you do both? Should you do just one? With web stories being kind of a smaller format in terms of text content, can that be a problem? Should you canonicalize the two?

I think it's totally up to you. What I have seen a lot is that a web story is a really nice way to build interest on a topic and then to link to the more detailed page. Because a web story is usually something you would look at on your mobile phone, you're in the mood to be a little bit entertained and informed, so presenting it in a way that is visually compelling and interesting, and then having that "read more" link at the end, I think that's a pretty nice model.
But how that works for your specific content is really hard to say. It is also something where I've seen some sites do it really well, in that they have really nice and visually compelling content, a little bit animated, and other sites essentially just use clip art or images from a blog post, where when you look at them in web story format, it's like, well, it's not really that exciting. And probably, I mean, I don't know the details, but my guess is what will happen there is we might show this web story, but if users kind of get lost on the first couple of pages and think this is not really that interesting, then they're not going to click through to your site.

So in cases where a lot of the traffic is coming from desktop pages, can you have a fallback? So instead of showing the web story, the same URL shows a normal blog post, or anything like that? I don't think that would work that well with Search, because we would index one set of content for that URL. And if you have a long-form blog post and you show Google the web story so that you get the web story ranking, then you miss a lot of information that is not in the web story. So that's something where I would tend to just see them as separate formats, where maybe you're referring to one or the other in the different locations.

OK, but if it's not that much content, is it OK to have a normal blog post and a web story, and the blog post canonicalizes to the web story, so that Google shows the normal blog post to desktop users and the web story to mobile users? I don't know. My guess is our systems aren't designed for that, and the outcome is a little bit unknown. I mean, if it ends up that people do this very, very frequently, which, from my point of view, probably doesn't make sense because the amount of content is just very different, but if people ended up doing it more frequently, then I think we'd have to come up with a way to do this deliberately, so that, like with a desktop and a mobile page, you could say this is my alternate for the desktop page and this is the canonical for the mobile page, kind of thing. But at least at the moment, I don't think we have any provision to understand that one URL is both a web story and a traditional page.

Yeah, yeah. Right. I was thinking, since, if I remember correctly, with web stories you can kind of swipe up and put more content there. So you have a bit of content on the image that you're being shown, and then you can swipe up and see additional text that is hidden by default, but users can swipe up and see the text. So you can put a bit more text there. Apparently. I haven't tried this; I've just seen an example. So I'm guessing you could maybe retrofit smaller blog posts into a web story format.

My recommendation would be to try all of these things, especially with web stories and the newer technologies. It's something where, theoretically, you can talk about this for a long time, but it's sometimes more interesting to just try it out and see what actually happens, and then you also gain experience there. I know the team that is working on web stories also runs the Web Creators channel on Twitter, and they're currently looking for questions about web stories. So you might want to find their Twitter account and drop some of your questions in that thread there too. Yep. Cool, thanks.
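For anyone who has not built one, a web story is an AMP page using the amp-story components. A heavily trimmed skeleton might look roughly like this; URLs are hypothetical, the required AMP boilerplate is omitted, and the exact required attributes should be checked against the AMP documentation rather than copied from here:

```html
<!doctype html>
<html amp>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width">
  <title>Example story</title>
  <link rel="canonical" href="https://example.com/stories/example-story/">
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <script async custom-element="amp-story"
          src="https://cdn.ampproject.org/v0/amp-story-1.0.js"></script>
  <!-- required AMP boilerplate styles omitted for brevity -->
</head>
<body>
  <amp-story standalone
      title="Example story"
      publisher="Example Publisher"
      publisher-logo-src="https://example.com/logo.png"
      poster-portrait-src="https://example.com/poster.jpg">
    <amp-story-page id="cover">
      <amp-story-grid-layer template="fill">
        <amp-img src="https://example.com/cover.jpg"
                 width="720" height="1280" layout="responsive"></amp-img>
      </amp-story-grid-layer>
      <amp-story-grid-layer template="vertical">
        <h1>Example headline</h1>
      </amp-story-grid-layer>
    </amp-story-page>
    <amp-story-page id="teaser">
      <amp-story-grid-layer template="vertical">
        <p>Short teaser text for the topic.</p>
      </amp-story-grid-layer>
      <!-- the "read more" link to the long-form article described above -->
      <amp-story-cta-layer>
        <a href="https://example.com/full-article/">Read the full article</a>
      </amp-story-cta-layer>
    </amp-story-page>
  </amp-story>
</body>
</html>
```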
OK, let me just grab maybe one or two more questions, and then we can move to more discussions.

Some websites in the EU use a JavaScript-based GDPR cookie solution, and this prevents users from interacting with the site, navigating pages, reading content, or clicking links until they have either accepted or rejected cookies. Where this is implemented, will Google's crawling technology be able to crawl and index the content and internal links on the pages? I checked Google's cached versions of various pages of various sites that use this solution, and the content does not appear navigable. The links are also not clickable.

So I don't know this specific cookie solution, so it's hard to say. In most of the cases that I have looked at, we can crawl and index the content normally. What is important for us with any kind of interstitial, or anything that you use as a pop-up or a banner on a page, is that in the HTML version of the page, when we crawl it, we can find the information that you want to have indexed for that page. That includes the textual information; it includes all of the layout information, the CSS, the images, all of that, and also the links on the page. If those are in the HTML, even if they're blocked with some kind of an overlay or some kind of a banner on top, then essentially, from an indexing point of view, we can crawl and index that site. And at least with the different kinds of cookie banners that sites tend to use, what I've seen is that this all tends to work out.

And this is something that is fairly easy to test. For your own site, you can use the URL Inspection tool to do a live test of your pages, and then you can look at the HTML version that Google actually uses for rendering and indexing. Within that HTML, you can check: is my content actually there, are my links there, is it formatted in a way that Google can understand? If that's the case, then you're all set. And because these tools exist and have existed for a while now, pretty much all of the mainstream implementations for this, I would say, should be able to get this right.

When it comes to the cached page that we provide, one of the things to keep in mind is that it's based on the HTML page that we fetched from the server. Sometimes JavaScript can run within the context of the cached URL, and sometimes it can't, and it depends a little bit on the JavaScript itself. The main issue here is that the cached page is, of course, hosted on a Google domain; it's no longer hosted on your domain. So if you have JavaScript that requires that it runs on your domain and is not able to run on other domains, which is often the case because of the security implications of JavaScript, then that JavaScript won't run. A really common place where this comes into play is if you have a JavaScript-based website and you look at the cached page and it's empty: you might assume that Google is not able to index the content, but actually it's more of a technical thing, that within the context of the cached URL, JavaScript can't run, so you can't see the actual content there. And what might be happening in this case is that maybe JavaScript can run for this particular kind of banner that you're using, and it's also blocking the cached version of the page. So essentially, that cookie banner that is running on the cached version of the page would also apply there. But like I mentioned, the cached version is essentially more of a technical copy of the HTML rather than a representation of what we would actually index. And to check the indexing, you'd need to use the URL Inspection tool.
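To make the cookie-banner point concrete: what matters is that the real content and links sit in the served HTML, even when a consent overlay visually blocks them. A rough sketch (class names and the handler functions are hypothetical placeholders):

```html
<body>
  <!-- real content and links are in the HTML, so they can be crawled and indexed -->
  <main>
    <h1>Article headline</h1>
    <p>The full article text is served here in the HTML.</p>
    <a href="/related-article/">Related article</a>
  </main>

  <!-- consent overlay sits on top visually but does not remove the content above -->
  <div class="consent-banner" role="dialog">
    <p>We use cookies.</p>
    <button onclick="acceptCookies()">Accept</button>  <!-- placeholder handlers -->
    <button onclick="rejectCookies()">Reject</button>
  </div>
</body>
```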
But like I mentioned, the Cache version is essentially more a technical copy of the HTML rather than a representation of what we would actually index. And to check the indexing, you'd need to use the Inspect URL tool. OK, we have just a few minutes left. So maybe I'll switch to questions from you. And I also have some more time afterwards if any of you want to hang around a little bit longer. I have a question, actually, if you hear me, John. Sure. It is about actually programming languages and their documentations. Since we have so much versions in programming languages, they create really duplicate content or nearly duplicate content. So we are using canonical tags for making only one of them index. Or Google chooses one of them as canonical version and tries to index just one of them. But this is also creating a kind of confusion between programmers because actually we are using old versions too. But when we search for them, we can't find them on the Google search. So we need to find them in the stock overflow while with internal source system with the websites. And some of the developer size or programming language size are using canonical tag in a manipulative way. They are marking the new version as canonical from the older version. But actually, the content is different. Yeah, purpose is same, but the content is different. So what can we do to improve this situation? I think that's always tricky because usually the names are exactly the same. And maybe the way that you call the functions are slightly different. And it's hard to know. Also the queries don't have versions. So even if I search for, I don't know, ECMASsecret6 or ECMASsecret5, I am using just ECMASsecret in the queries. So I understand your point too. Yeah, so my usual recommendation for this kind of issue is to try to keep the current version of the URL stable and to move the older versions to kind of archive setup. So this is something that we run across with events or with different product generations that happens fairly frequently. If you're searching for an iPhone, you probably want the newest iPhone, but you also want to find information about the older iPhones. So what we recommend doing there is then having a kind of a stable URL for the current version, which could be, I don't know, which iPhone version is the current one, whatever number it is nowadays. And when a new device comes out to take the old current version and move that into a separate URL. So maybe you would have the new one is, I don't know, iPhone 15 or whatever it is. And then the iPhone 14 moves to a different URL, which would be iPhone-14 or something like that. And what would happen there is we would not lose those older versions of content, but rather we would notice that the stable URL is kind of the one that is the main version of the content. So if someone is specifically searching for just the newest iPhone, they would find that stable URL. But if they're looking explicitly for the older version, then they would still find the older version of that. And with programming languages, you can probably do something similar, where you say, well, this is the newest version of the JavaScript syntax, and this is the current version. And when something new comes out, then we take the current version, move it to an archive, and say this is ECMAScript 5, and kind of move that to an archive situation. OK, thank you for the recommendation. I will try that. OK. Hi, John. Hi. I have two questions about manual actions. 
Hi, John. Hi. I have two questions about manual actions. The first is: after a manual action is revoked, is there a kind of time frame in which the website has less trust than it would have without the previous manual action? And the second question is: if you start using the disavow tool, what happens if you completely remove the disavow file after many years? Is there a risk of getting another manual action?

Yeah, so when a manual action is revoked, essentially everything that is associated with that manual action is turned off. So if there were issues with links to your site or with the content on your site, and the manual action is resolved, then that is completely resolved. Sometimes there are technical things on our side that just take a little bit of time to get updated. So, for example, if a site was removed completely from indexing because it was just a scraped copy of other content, and we revoke that manual action, then it takes a bit of time to get indexed again. And that's not because we don't trust the website anymore; it's just that, for technical reasons, it takes some time.

With regards to the disavow file: if you remove the disavow file, then all of those links will be treated as normal links again. And it could happen that at some point the webspam team looks at that and says, oh, well, this is very problematic, and there is no disavow file here at the moment, so we will have to take a manual action there. And that's something that, from a spammer's point of view, we have sometimes seen, where people say, oh, I got a manual action, I fixed my manual action, I know what to do because I'm a professional at this, and the manual action is revoked. And then they say, OK, I will switch all of this spam that I've been doing back on again. That's something the webspam team has also run across, and when they see this kind of switching back and forth happening regularly, they may say, OK, for the next reconsideration request, we will just wait a while to see what they actually want to do. But for, I would say, normal websites, where after a couple of years you just remove the disavow file because you think it's no longer relevant anymore and your website is in a new, clean state, then that is probably less of an issue. OK, thanks.

All right. Maybe I'll just pause the recording here. If any of you want to stick around a little bit longer, you're welcome to do so. In any case, thank you all for joining, and thanks for all of the questions that were submitted. We didn't get through all of them; it looks like there's still a bunch left. I'll try to add some answers in the Q&A on YouTube as well. I hope to see some of you again in the future. And like I mentioned, if you want to stick around a little bit longer, you're welcome to do so. Thank you, John. Thank you very much. Thanks. You're welcome.