All right. Welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do are these Office Hours Hangouts, where people can join in and ask any question around their website and web search. A bunch of stuff was submitted on YouTube again, which is great, a good way to get these questions in. But as always, if any of you want to get started with the first question, you're welcome to jump in now.

Hey, John, can I go first?

Sure, go for it.

All right, so my question is regarding a site:domain.com query. So usually, when we want to see if a specific page is indexed in Google, other than Search Console, I do a site: query directly on the web. I see the home page is not coming up, which used to come up before. And any time I go back into Search Console and resubmit it, I see it coming up for 24 hours, like for a day, and then it disappears. Surprisingly, that page is still coming up normally and working fine for one of the competitive keywords. It's not like it's a blacklisted page. If it was an internal page, I would not worry, for a variety of reasons. But since this is the home page, I don't know, should I be worried? And there are no signals in Search Console that are showing me an error. The AMP version and everything looks fine.

Yeah, that's something that comes up every now and then. But it's not a sign of anything being wrong or anything broken. The main reason you're seeing this is that for site: queries in particular, we don't have any defined order in which we would show the results. So it's very common that you would see the home page first or on the first page of the site: results. But it's not always the case. And it's not a sign that there's anything wrong with the website or anything kind of blocked or, I don't know, otherwise problematic. So if it's ranking normally, I think you're OK.
Even though it doesn't appear at all in all the pages?

I mean, "at all" is kind of tricky because of the way that things get filtered out from a site: query, where we try to filter out things that are kind of duplicate, where we have already shown the same title and description. And because of things like that, the site: query isn't really meant as a comprehensive list of all the pages that we have indexed. And since it's an artificial query, we don't have any kind of defined order for the site: results.

So nothing to worry about at all? It's just completely natural to have the home page missing in it?

Yeah. I mean, on the one hand, for a site: query, that's something that happens every now and then. It sometimes also happens that even within a website, some other page is seen as the primary page for the website. So even just from the point of view of what your primary page is, sometimes that's not necessarily the home page, for, I don't know, a variety of reasons. Maybe some product that you have is what your whole company is known for, and that's some subpart of your website.

Got it. OK. Thanks.

Cool.

I had a question. A lot of people who are doing personalization now are talking about adaptive content, for example, for their mobile home page versus their desktop home page. And with mobile indexing, how does Google feel about adaptive content where the content on the mobile version doesn't necessarily contain all of the same content that's on the desktop?

Ultimately, that's something that you kind of have to decide for yourself. Because from an indexing point of view, we will index the mobile version of the home page. And usually, we crawl and index from US-based IP addresses. So if you're doing adaptive content by country or by language or by city or something like that, then that would also play into that. But essentially, we would index the content that we would see.
And if you show users other content for other locations or other devices, that's kind of up to you. The only problematic case where someone on our side might get involved is when it gets into more malicious content. Where maybe you show, I don't know, comic books on the mobile version, which is the one that we index, and the desktop version is an adult website or something crazy. That's something that we don't see very often, but it can happen every now and then. But if you're adjusting the content slightly, if you're saying, well, on mobile, people don't want to read as much, so we have a shorter version of the page, and on desktop, we have maybe more links to PDFs or something like that, that's completely fine. Totally up to you. The only thing to keep in mind is that we'll just index the mobile version. So we won't look at the desktop version at all. If there's something that you want your site to be known for that's only on the desktop version, we might not know about it.

Makes sense. Thank you.

Cool. Thanks.

I do have a follow-up note on that. What we have been doing is we'll try and replicate the same content, considering it's used for ranking. But for the mobile version, we'll probably add a link called Read More. So if someone wants to sit down, on an iPad or something, they can. Otherwise, they see a much smaller version of the same piece of content. Since that is what Google is going to consider for ranking, I don't know if that makes sense, like if it's OK or not.

I mean, from our point of view, that's OK. I don't know if users always appreciate that, because lots of people on mobile do want to see the full content. But that's ultimately up to you. So we'll just index the version that you have on the mobile version. And if that's all you want to have indexed, that's up to you. If you prefer to have more indexed, then make sure that more content is on the mobile version as well.

Got it. OK.
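As a rough sketch of what such a setup looks like on the serving side, here is a minimal, hypothetical example of picking a shorter mobile variant by User-Agent. The content strings, the helper name, and the mobile-detection heuristic are all illustrative assumptions, not anything Google prescribes:

```python
# Minimal sketch of User-Agent-based adaptive content ("dynamic serving").
# All names and content here are made up for illustration.

FULL_ARTICLE = "Intro paragraph. " + "Long body text. " * 50
SHORT_INTRO = "Intro paragraph. [Read More]"

# Crude heuristic; real sites typically use a device-detection library.
MOBILE_HINTS = ("Mobile", "Android", "iPhone", "iPad")

def select_content(user_agent: str) -> str:
    """Return the shorter variant for mobile user agents.

    With mobile-first indexing, Googlebot mostly crawls with a mobile
    User-Agent, so whatever this returns for mobile is what gets
    indexed; content only in the desktop variant may never be seen.
    """
    if any(hint in user_agent for hint in MOBILE_HINTS):
        return SHORT_INTRO
    return FULL_ARTICLE

# A response that varies by device should also send a
# "Vary: User-Agent" header so caches keep the variants apart.
```

The point of the sketch is the trade-off John describes: the mobile branch is the indexed one, so anything you care about ranking for should be in `SHORT_INTRO`'s page too, not only in the desktop version.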
Let me jump into some of the submitted questions. And if you all have any questions along the way, feel free to jump in. Or if there's anything kind of related that comes up, feel free to bring that up. Let's see. I think this is a similar question already. A question around cloaking in general. "In some situations, we display slightly different content for US IPs, for our domains with non-English language, to optimize the way that Google indexes our site. As Googlebot is mostly coming from the US, do we risk any cloaking penalty with this behavior? How does Google define or detect cloaking in general? Is the same IP, the same content, a safe enough solution?"

So in general, the thing you would need to watch out for is that you're not serving Google or search engines specifically different content. If you're serving all users in the US this content, that's perfectly fine. That's the version that we will index. If users from other countries see slightly different content, that's ultimately up to you. In an ideal world, we would be able to crawl and index from all of these different countries and see all of those versions of the content. But for practical reasons, just crawling the web once is already pretty hard. So that's not something that we can easily expand to all other countries. So if you're showing content to users in the US and Google is crawling from the US, then that's what we would index. That's not something that we would consider cloaking. When it comes to cloaking, if the web spam team suspects there is cloaking happening, then they have lots of sophisticated means to test that. So that's kind of something where I suspect it's not anything you need to worry about if you're doing this in a reasonable way. But the web spam team does sometimes take time to figure out what's actually happening and to make sure that it's not causing any problems in our search results.

And then a question regarding 301 redirects.
"For strategic reasons, we would like to redirect all users coming from IP addresses from a certain country to a different domain. Let's say we have our domain, abc.info, and we redirect every user coming from a Russian IP to xyz.org. xyz.org would have a canonical pointing to abc.info. The reason is we want abc.info to rank and xyz.org to serve visitors from Russia. Do you see any risks here which might harm the rankings of abc.info?"

In general, this is something that you can do. I think you probably want to watch out that your users are also happy with this, because ultimately, if your users learn that your website is not what they were looking for, then they'll go elsewhere anyway. But in general, when it comes to internationalization, you can have a setup where you have something like a central home page that redirects users from different countries to different versions of your site. The thing to watch out for, again, is that we will crawl and index from the US. So if you always redirect US users away from your Russian content, we would never recognize that you actually have Russian content, which might make it hard for you to rank for all of these Russian keywords, for example. But in general, this kind of a setup is something you can do. You can use hreflang to tell us about this and flag the redirecting page as an x-default page. And by that, we will generally learn which versions to show in search. It's not always guaranteed that we will show the canonical URL, the one that you have the rel canonical pointing to. So if we see lots of signals telling us, actually, your Russian site is the one that people prefer, then we might index that URL as your main website. So for canonicalization, we use a number of different signals. The redirects help us. The rel canonical helps us. But we also look at things like links internally and externally, sitemap files, all of the other signals that we have there.
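As an illustration of the hreflang setup being discussed, the annotations on each version might look something like this. The domains come from the question; the exact URLs and the single-country setup are assumptions for the sketch:

```html
<!-- Served on both abc.info and xyz.org, so the annotations are reciprocal -->
<link rel="alternate" hreflang="x-default" href="https://abc.info/" />
<link rel="alternate" hreflang="ru" href="https://xyz.org/" />

<!-- Additionally on xyz.org, the canonical the question describes -->
<link rel="canonical" href="https://abc.info/" />
```

Note that, as John says, the rel canonical here is one signal among several, not a directive: if links, sitemaps, and user-facing redirects all point at xyz.org, Google may still pick that URL as the canonical.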
So that's something just to keep in mind if you're seeing that the wrong version is being indexed.

"What's a typical timeline for a large corporate site to see the sitelinks search box removed from search results?"

I assume this is if you add the no-sitelinks meta tag. I don't know what the exact meta tag is called. But there's a meta tag that you can add that removes the sitelinks search box. If you add that to your pages, then generally, over time, we will drop the sitelinks search box for that site. I don't know how long that takes to be processed. I do know that if you add the markup to provide your own sitelinks search box with your own site search, that can sometimes take quite a bit of time. So with some kinds of structured data, we can pick that up within a couple of days. With the sitelinks markup, I've seen it sometimes take a month, maybe a little bit longer. So sometimes that just takes longer to be processed.

"We changed one of our sites' title and description and content three weeks ago, the NFL schedule for the new season. And at first, Google displayed the new snippet and ranked the site quite well. However, after three days, Google displayed the old snippet and ranked us on page two. Since then, the snippet and the ranking have been switching about every three days. Googlebot crawled the site several times since then. What's the matter here?"

I don't know. It's really hard to say without an example. This sounds like something that shouldn't be happening like this. So if you want to send me some examples of where you're seeing this happening, Florian, then feel free to do that. Feel free to ping me on Twitter if you want, or add some more details to the question here.

"What's Google's policy on back button hijacking? Sites like Gear Hungry, once you visit from Google and click the back button, take you to the home page instead of the Google search results. Is there a penalty for such an action? Or is it allowed? It feels kind of sneaky."
So this is something which I certainly wouldn't recommend doing, because if people want to go somewhere, then it's probably a good idea to let them go there. I don't think they'd be happier if you just sent them to your home page. I'm not aware of any specific policies on our side with regards to back button hijacking. So I don't know if that's something that the various Google teams would really worry about, because it's really something more where you're annoying your users. It's not that you're causing problems in search.

A question about E-A-T and YMYL. So E-A-T is expertise, authoritativeness, trustworthiness, and YMYL is your-money-or-your-life content. These are terms from the Google quality rater guidelines that we put out. "We're working with news websites. What tips can you give us about indicating content authors? Is it really necessary to make pages for each author, providing a big info box with photo, bio, and links to social networks? I mean, does this really matter, given that there's lots of work to do elsewhere?"

I think with all kinds of content, it's not the case that you can say this really matters and you absolutely must do it. I do think with a lot of news websites, especially if you're providing information that you want people to trust, then this certainly makes sense. So it's not something where I'd say it's the same as removing a noindex meta tag on a page, because that's really an on-and-off switch. But if you're improving the content of your site, that works well for users, that works well for Google. So that seems like something that could be done. How to prioritize that versus other things on the website, that's really hard to do. That's where you almost need to use your experience and figure out what works well on your site.

Let's see. "We're publishing news and articles. For example, we have 100 new articles every day. 10 of them give us 95% of the organic search traffic; another 90 go nowhere.
We're afraid that Google can decide our website is interesting only for 10% of it. There's an idea to hide some boring local news under a noindex tag to make the overall quality of all published content better. What do you think?"

In general, we do look at the content on a per-page basis, and we also try to understand the site on an overall basis, to understand how well this site is working, whether this is something that users appreciate, whether everything is essentially working the way that it should be working. So it's not completely out of the question to think about all of your content and think about what you really want to have indexed. But especially with a news website, it seems pretty normal that you'd have a lot of articles that are interesting for a short period of time, that are perhaps more of a snapshot from a day-to-day basis for a local area. And it's kind of normal that they don't become big popular stories on your website. So from that point of view, I wouldn't necessarily call those low-quality articles, for example.

On the other hand, if you're publishing articles from hundreds of different authors, and they're of varying quality, and some of them are really bad: they're kind of hard to read, they're structured in a bad way, the English is broken. And some of them are really high-quality pieces, of art almost, that you're providing. Then creating that kind of a mix on a website makes it really hard for Google and for users to understand that actually you do have a lot of gems on your website that are really fantastic. So that's the situation where I would go in and say, we need to provide some kind of quality filtering, or some kind of quality bar ahead of time, so that users and Google can recognize: this is really what I want to be known for. And these other things, maybe user-submitted content, are something we're publishing because we're working with these people, but it's not what we want to be known for.
Then that's the situation where you might say, maybe I'll put noindex on these, or maybe I'll initially put noindex on these until I see that actually they're doing really well. So for that, I could see it making sense that you provide some kind of quality filtering. But if it's a news website where, kind of by definition, you have a variety of different articles, they're all well written, they're reasonable, just the topics aren't that interesting for the long run, that's kind of normal. That's not something where I'd say you need to block that from being indexed, because it's not low-quality content, it's just less popular content.

"In my Search Console account, I see a lot of URLs in the coverage report that return status code 500. However, these are URLs that are created by the buttons for sharing the page on social networks, under a path like social-network/facebook/ID, for example. But those URLs are not returning a server error, but a temporary 302 redirect towards a social network. How can I correct these errors when they're really not from our site?"

OK, so that sounds like these errors are things that happen when you redirect to an external site. These are cases where, essentially, on your site everything is working well, and the external site is maybe serving a 500 error, or serving something else where, for whatever reason, it's not returning a normal 200 result. That's something that essentially doesn't bother us that much. We will flag it in Search Console, because it is something where we tried to crawl a URL on your site and we ended up on an error page, which could be kind of a sign that there's something weird happening. But if you're aware of this, then that's generally not a problem. What you could do is perhaps block these URLs on your site by robots.txt, because those are not things that really need to be crawled and indexed.
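For example, if those share-button URLs all sit under a common path, a robots.txt rule along these lines would stop them from being crawled. The directory name here is made up to match the question's description:

```
User-agent: *
Disallow: /social-network/
```

This only blocks crawling; URLs that were already indexed may still appear for explicit site: queries, without a title or description, but they won't cause crawl errors in the coverage report anymore.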
So from that point of view, you can prevent that error from appearing in the crawl errors by blocking them by robots.txt. In general, I don't think that would cause any problems overall, so that's probably the direction that I would go there. What might happen is that if you do a site: query for these specific directories on your site, then maybe we would show these URLs without a title and without a description, because they're blocked by robots.txt. But if you do normal queries for your website, these aren't things that would show up in the normal search results, so that's not something you'd really need to worry about.

"Does Google ever use historic data when deciding how to rank a site, or do the algorithms only look at the present and most recent snapshots? I mean, can a website build up goodwill over a period of time, which may help it?"

Yes, in some ways, that can happen. So in particular, if you have a website that has existed for a very long period of time, then that's something where the whole ecosystem around the website will have evolved around that. And there'll be links from all over the web, built up over a longer period of time. And when we look at that website, we will see kind of the current snapshot of the website, what we recently crawled, but also all of these signals that have been collected over the years. So that's something that can definitely play a role. There are also signals within search, specific to a website, that might be collected over time. The ones that I've run into a few times are generally around adult content. So that's something where, if a website for a longer period of time provided adult content, then our algorithms might start learning that actually we need to filter this using SafeSearch.
And if that website were to completely revamp, and it was suddenly, I don't know, a non-adult website, then it can happen that for a period of time those SafeSearch algorithms will try to stay on the safer side and might still filter that site in SafeSearch for a while. So that's something that can sometimes take a bit of time to settle down. Similarly, with some of the quality algorithms, it can also take a bit of time to adjust from one state to another state. So if you significantly improve your website, then it's not that from one crawl to the next crawl, we'll say, oh, this is a fantastic website now. It's something where, probably over the course of a year, maybe sometimes even longer, our algorithms have to learn that actually this is a much better website than we thought initially, and we can treat it a little bit better in the search results over time. So the historic data that you mentioned there is not historic in the sense that it's 10-year-old data; it's more that sometimes the status of things maybe a year or even two years ago can still play into how a website is ranked in search now.

"Regarding one point I came across recently: does a link pass PageRank on a page that I have temporarily removed in Search Console?"

Yes. So the temporary removal in Search Console does not change anything about the indexing of a page. It really just hides that page in the search results. So that's something where, essentially, from our side, we want to provide a way to make it possible for people to remove things as quickly as possible. And if we needed to go through the whole indexing system to do that, that would take quite a bit of time. So the temporary removal essentially just hides it in the search results. It doesn't change anything on the indexing side. For the indexing side, we still need to recrawl and re-index that page.
If that page is now gone, if it has a noindex meta tag on it, then we'll be able to take that into account the next time we've reprocessed that page, and when the temporary removal expires, for example. So similarly, if you add a link to a page, and this page is something that is temporarily removed in Search Console, then when we recrawl that page, we will find that link. We will treat that as a normal link, as we would any other link in indexing. And we might crawl the page that is linked, or we might pass some signals to the page that is linked from there.

"Can I use a 301 on a domain after a year or so? And what amount of PageRank will transfer?"

I'm not quite sure how you mean that. But in general, if you're moving from one domain to another, then a 301 redirect is the right way to deal with that. And with a clean 301 redirect, if you're redirecting every page one by one and there's nothing left on the old website, then we will try to pass all of the signals on as much as possible. And the signals include PageRank. We use a lot of other signals, though, so it's really not just PageRank that is being processed there. If you wait with this redirect, if you do a partial redirect from the website, if you do a redirect in a way that makes it hard for us to recognize that this is a one-to-one site move, then that makes it a lot harder for us to process those signals and to forward them on. That means, essentially, what usually happens is it takes a lot longer, and the outcome is not so clearly defined. So if you're redirecting a part of a website, then we essentially have to reprocess both the old and the new website and try to figure out what the new status of this website should be. So it's not that we would just pass everything on; we kind of have to recalculate everything. Similarly, I think maybe this question goes in the direction of: what if you only start setting up the redirect a year after you actually do the site move?
For the most part, we will try to recognize site moves even without a clear redirect, but it's a lot harder. And if you wait a year to set up this redirect, then that's kind of a long time already, and things will have already settled down a little bit. And what the final state there will be is really hard to predict. So I think adding a redirect later, if you forgot that with a site move, is always a good idea. If you wait very long, like a year, then I don't know what effect you would effectively get from that.

"I'm trying to move my landing page from WordPress to a new React framework with server-side rendered pages. The new React page gets an almost perfect score in every category in the DevTools audit. It scores far better than the WordPress page. I've copied almost all the content word for word from the WordPress site over to the new React site. I ended up changing my name servers from AWS GoDaddy to Zyte and ended up losing my rankings. I'd love to know how to migrate a site properly. I'm not sure what the main problem is with moving things over: is it the name servers or the DNS records? One difference is that WordPress adds a lot of metadata and has a lot of plugins."

Yeah, I don't know. So from a purely technical point of view, setting aside moving from WordPress to a React framework, it sounds like you're essentially restructuring your website. You're revamping the whole website. And that's probably the part where our systems got confused or where our systems are running into issues. So it's less a matter of what technical infrastructure you're moving across, and more that you essentially have a new website. The changing of the name servers, I would see that as totally inconsequential. Unless the new name servers are completely unreachable by Google, that's totally irrelevant. But what probably plays a role here is the change of the site itself.
So copying the content word for word is one thing, but there's a lot more to a website than just the words on a page. There are things like titles and headings on a page. There are images, and the way the images are embedded. There is the design of the page itself, the internal linking, the navigation. All of that plays into us understanding what this website is about and how the individual pages are connected to each other. So that's something where sometimes, if you take the content and just copy it word for word to something else, then we can still understand this is the same content, but we might have trouble understanding the emphasis of the individual pieces of content. Things, like I mentioned, like the headings, or the elements that are somehow otherwise emphasized within the pages. So that's something where I could imagine you might be running into problems.

The other thing is, depending on what all you changed: if you changed the URLs themselves, then you'd also need to set up redirects between the old URLs and the new ones. Sometimes if you're migrating from one CMS to another CMS, these URLs change. Sometimes you can keep them the same. If you can keep them the same, that's an optimal situation. If they change, then we essentially have to reprocess the whole website again. The internal linking is the other part that I briefly mentioned: things like the navigation, how you link the various pages and parts of your website together with categories and higher-level pages and maybe snippets and overviews, all of those things. That also plays a role in how we can index your pages properly. So that's something where I'd also watch out for that. The other thing that might be playing a role is with regards to moving from a static HTML version to a more JavaScript-based version.
It sounds like you're doing server-side rendering, which would mean that the new website is also a static HTML page when we try to crawl and index it, which means it's more or less the same situation. But at the same time, if you're doing this kind of server-side rendering for Google specifically, then there are lots of opportunities for things to go wrong that you might not be seeing as a user. So especially if you're doing something fancy for Google in that regard, then you just have to check a lot more things.

So my recommendation here would be to try to figure out the differences as quickly as possible. If you still have a backup of the WordPress site, then maybe install that on a different server somewhere so that you can crawl the whole website. Then you can compare it on a page-by-page basis with the new website. Maybe using some third-party crawling tool, figure out: is the internal linking the same? Are the same URLs being used before and afterwards? If not, is that something that you can still tweak on your new website to make it more the same? How do the individual pages look? Do they have headings and titles? Do all of these things match? That's something that also plays a little bit of a role there.

Let's see, the metadata that you mentioned: a lot of the metadata on pages is something that we can ignore. Sometimes it's relevant for social media sites, for example for finding which thumbnail image should be shown for a page. Those are things that are probably worth adding anyway, but that's not something that we would use for indexing and ranking. There are some other types of metadata on pages, though, that we would use, especially if it's something that we would pick up as structured data, or if it's something that we would show maybe as a description in the search results page. Both the description and structured data are not things that we would use for ranking, though.
But you might see it in that your search results entry looks a little bit different. And if it looks a little bit different, maybe users respond to it a little bit differently. Yeah, wow, so many things to watch out for. But my general recommendation in this kind of situation is to try to understand exactly what's happening as quickly as possible. Because the longer you let it sit in this state of being different, where you're not sure exactly why it's behaving differently, the more likely our systems will just learn your new website and say, well, it's a different website, we should rank it differently, but we've learned how to deal with it. Whereas if you can fix these things as quickly as possible, then we can understand a little bit better: oh wait, this just tweaked slightly, and we can take all of the existing signals that we have and apply them to the new website.

Kind of weird. I thought I had a lighter question.

Go for it.

Sorry, go ahead. I'm on the first one.

Sure, a bit of a lighter question, one that hopefully has a relatively quick answer. Are there plans in Search Console to bring back the highlighter tool? I'm thinking of people that either don't have access to developers, or who are on large enterprise sites where developers are tied up on other things.

Yeah, so it's still there. It's just kind of hidden away with the old tools at the moment. But it is something that the team is looking into, how we can migrate that to the new infrastructure. So I hope that doesn't go away, because like you mentioned, a lot of times you don't have the ability to just go out and get all of the code changed. And it's sometimes really useful to be able to prove to people that it's actually worthwhile doing. So that's something where I think there are some options there. Another thing that we noticed is that with Google Tag Manager, you can sometimes also do similar things, in that you can add structured data and let that be picked up like that.
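As a sketch of that Tag Manager approach: a Custom HTML tag whose body injects a JSON-LD block into the page, which can then be picked up when the page is rendered. The organization details here are placeholder values, not from the discussion:

```html
<!-- Body of a Google Tag Manager "Custom HTML" tag (placeholder values) -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "url": "https://www.example.com/",
  "logo": "https://www.example.com/logo.png"
}
</script>
```

Since this relies on JavaScript injection, Google only sees the markup after rendering the page; structured data served directly in the HTML is generally the more robust option when developers are available.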
It's a little bit more complicated, but it might also be an option.

The question I had around the site move: like any other SEO guys out there, we have all burnt our hands changing a website, right? I have yet to see a migration that goes smoothly. If you're ranked in the top 10, you're getting decent traction, but you want to make it mobile friendly, you always want to do the next step. So is there any plan from Google to provide some kind of assistance in Search Console, or some way of tagging, hey, we are migrating to a new site, so don't burn the signals, we'll keep the same URLs? Obviously, when you move from one CMS to another, or to a different server or new code, you are going to expect a complete design change. You can make sure the UI remains the same, or the links and content. But there is so much that runs behind it. And if Google is going to reprocess it, it's like a new marriage: after everything, you start all over again. So I mean, is Google trying to discourage users from coming out with new websites and giving a better experience, or how can you help us, the whole community, better?

Yeah, good question. I think the general situation with a lot of these site revamps is also that you're trying to improve things. So that's something where I think sometimes it does make sense to rethink the signals, because these signals can also improve in a positive way. It's definitely not the case that every site migration that you do will always result in negative signals and things getting lost, because a lot of times you will be able to work on a website, and you recognize all of these problems, and then you create a new structure, a new website that doesn't have any of these problems. And you want the work that you put in to be reflected well in search as well. So that's something where, I don't know.
It's kind of tricky to say that we should keep the signals and not let a site profit from the improvements there. But it is always tricky, because I think the part that always makes it hard for me when talking about this is that it's hard to judge ahead of time what the actual effects will be. And especially if you don't have a lot of experience around SEO, then it's really hard to judge what will actually happen when we switch over. And the other thing that I always see as well is that sometimes these kinds of site revamps are done by people who don't actually know much about SEO, maybe the developers, or maybe the design team or marketing team that says, oh, we need a new website, we will outsource a new website. It looks really nice, it's really fancy. And they don't realize that all of these changes could potentially affect the search visibility as well. Things like removing text and putting it into images: it looks the same in a browser, but it reacts very differently. And as an SEO, when you look at this new site, you say, well, this is a problem, this is something that we need to fix.

Um, yeah. It's weird, someone mentioned that people are having trouble getting in. I don't know, that is kind of weird. Usually I get a little pop-up saying that people want to be let in, but that doesn't seem to be happening at the moment. I don't know, it's hard for me to figure out what exactly is stuck. But so there are no plans from Google to kind of help with this problem, right? Like from the webmaster team, I guess?
So I think the one thing that we've been looking at, specifically for site moves, is to have at least some kind of progress indicator that lets you know when Google has finished reprocessing your site move setup. I think that makes it a little bit easier to understand when you've reached the final state, so that you're not in this uncertainty of, do we just need to wait for Google to figure it out, or is there actually something that you need to do differently to move into a different situation?

Yeah, I think that would be very helpful, because then we'd know when to panic versus, wait, Google is still working on it, give it six months, that's fine. The wait is fine as long as we know it's in transition or progressing. And going back to the other thing that you discussed earlier with one of the other users about content. So I had a site where the content wasn't changed in five years, seven years. Suddenly the guy is free because of this pandemic. He's like, let me revamp the content, I haven't changed it in a while. And he didn't see any positive result. And I kept telling him content is the key. He's like, it didn't change anything. I'm like, I guess we need to wait. And I don't know if there could be something like the temporary removal tool, a way to indicate somewhere in Search Console, hey, this is a new page. And if Google wants to take six months to settle down, that's fine. It would be nice to see the progress, like you said, for a site move, or when you tag a new page. I don't know if that's even possible, but yeah.

Yeah, I think it's always a bit tricky, because there are always some things that change on a website. And it's hard to understand when you just need to let the systems figure it out versus when there's something that you need to change yourself. I don't know. Good point. I think that's something we can certainly think about and see if there is something that we can do there.
One of the things that we did add with the new version of Search Console is the general concept of being able to validate fixes. Maybe that's something that we could implement there in general, where you can say, well, I've revamped my content, I would like Search Console to validate the changes that I've made with the validation process. At the moment, it's tied to issues that we recognize, which are usually technical issues, like structured data or indexing. And what happens there is that we try to re-crawl and re-process the website a little bit faster, so that we can reflect that new state in the index a little bit quicker. So maybe that's something that we could also expand to more content or revamp situations as well.

Yeah, that's exactly it. So unless the user checks in manually, you are waiting for input from the company or someone to say, hey, this is new. And then you say, all right, I'm going to do it again, and I'm also going to show the progress this time or something. Yeah. Thanks.

Regarding the people that can't seem to get in, I don't know why that seems stuck. But I suspect maybe it's tied to the recording that's running. So once we pause the recording, we'll see if that starts working again. I don't see any other option of adding people unless I have their email addresses, which I don't really have. Sorry, I posted that. I can get them to try. They tried a few times. Do you want me to get them to try now? Yeah, OK, well, maybe we'll just pause the recording and keep it shorter and see if more people can join in. All right, thanks for watching. And if you found this useful, feel free to hang around on the channel. And maybe we'll see each other again a little bit later. All right.
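As an aside, one concrete way available today to nudge recrawling of revamped pages is to update the `<lastmod>` date for those URLs in your sitemap and resubmit it in Search Console. A minimal sketch of generating such entries, where the URLs and dates are illustrative:

```javascript
// Sketch: generate sitemap entries with updated lastmod dates
// for pages whose content was just revamped. URLs are made up.
function sitemapEntry(loc, lastmod) {
  return [
    '  <url>',
    '    <loc>' + loc + '</loc>',
    '    <lastmod>' + lastmod + '</lastmod>',
    '  </url>',
  ].join('\n');
}

const revampedPages = [
  { loc: 'https://example.com/', lastmod: '2020-06-01' },
  { loc: 'https://example.com/services/', lastmod: '2020-06-01' },
];

const sitemap =
  '<?xml version="1.0" encoding="UTF-8"?>\n' +
  '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
  revampedPages.map(p => sitemapEntry(p.loc, p.lastmod)).join('\n') +
  '\n</urlset>';
```

Resubmitting the updated sitemap then gives Google a dated signal that those specific pages changed, which is about as close as you can currently get to tagging "this content is new."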