Let's start. All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst at Google in Switzerland. And part of what we do are these Office Hours Hangouts, where folks can join in and ask their questions around their website and web search. A whole bunch of things were submitted already. But if any of you want to get started with the first question, jump on in now. Yeah, hi, John. Hi. I have a question regarding my eight-month-old website, actually. It was ranking very well until about a month ago. And it has a few backlinks from nice, good websites, and that's why it was ranking well. But all of a sudden, my website got spammy backlinks from my competitor. My competitor created porn-related keyword backlinks all over the internet. And after that, all of a sudden, my website dropped out of the search results. So what preventive measures can I take, how can I recover my rankings, and how should I proceed from the current situation? OK, so I think if your website is just, let's say, a couple of months old, maybe eight months, maybe a year, then that's still very, very fresh with regards to the rest of the internet. So that's kind of a time where our algorithms are still trying to figure out how and where we should show your website in the search results overall. So that's something where it would be normal to see some kind of fluctuations around how it's being shown in search. Sometimes things go up for a while. Sometimes they go down for a while. And then I'd say over the course of a year, things settle down into kind of a stable state. So that's something where I wouldn't necessarily worry too much about this particular situation. I'd continue working on your website, and over time, that's something that should be reflected in search overall. Actually, my question is regarding spammy backlinks.
My competitor has created backlinks from 1,500 different websites. And the keywords on those links are spammy, like porn-related keywords. And I think that's the cause. That's something that we see all the time. It's really easy for people to run a script and to drop thousands and millions of links on the internet. And we ignore that. That's something where people get really busy and they do all of this crazy stuff. But our algorithms have seen it so often, they're used to just ignoring it. So actually, that's not the thing I would worry about. But does the anchor text on those spammy backlinks matter or not? If we ignore those links, it doesn't matter at all. It doesn't matter which websites. Whether it is dofollow or nofollow? Whether it is dofollow or nofollow doesn't matter. Yeah, no, no. This kind of thing is really easy, where you can take $5 and send it to someone and they'll put thousands of links up on the web. And people have been doing this for years and years and years. So should I take a preventive measure like disavowing all those URLs? I don't think you need to. I don't think that would change anything. I would just completely ignore it. If your competitor is focused on building bad links for your website, then at least they're not making a better website for themselves. Yeah. Yeah, OK, thank you, John. Sure. Hi, John. I have a question. Hi, I'm Chris. So thank you for all the help with the webmaster community. I think my colleague joined the German Webmaster Hangout yesterday, and thank you for being there, too. So my colleagues and I are facing a display issue for some of our pages in Google search. Basically, for many of our US .com product subscription pages, Google is showing metadata, such as the meta title, meta description, and structured data for product pricing, that doesn't match the URL. So you can see the URL is .com.
But we are seeing German and Finnish metadata on our US .com pages. So when we inspected the URLs for the US .com pages in Google Search Console, we saw that the selected canonical is the German, and sometimes the Finnish, page. But when we look at our source code, for all the US, German, and Finnish pages, the canonical is actually pointing to the respective page itself. So there's no issue with our hreflang tag implementation. And I just have some quick questions here. One question is, why is Google search showing the German and Finnish metadata on our pages? And also, is it possible that Googlebot is visiting our US site from non-US countries? If so, could the automatic redirect of the search bots be causing the incorrect caching? And finally, will it be acceptable, I mean, will it incur any penalty, if we were to exclude the search bots from these geo-IP browser-language redirects? As our intention is to serve the best page and language to the correct country users. OK. So I think that adds a little bit more flavor to what I heard yesterday. Yesterday, it really sounded like we were mixing things up across different languages and countries. Usually, when we mix things up with regards to internationalization, it's content that's in the same language for different countries. For example, if you have the same German content for Germany and for Austria, then our systems might say, well, it's the same content, we can treat it as one page. But that wouldn't apply if it's German and Finnish and English, so this kind of mix-up is a bit unusual. But what you mentioned there is that you're doing some kind of an automatic redirect based on the location of the user. Yes. And sometimes that can be problematic, because Googlebot crawls from one primary location for a website. And for the most part, that's probably going to be from the US. That means when we crawl your German pages, your Finnish pages, or whatever, we will crawl them from a US IP address.
So if you're automatically redirecting US-based users from the German page to the English page, then we would think that you're trying to fold these pages together and that we should treat them as one page. So that might be something that's happening there. I think from a practical point of view, it's kind of hard to test unless you have the ability to test from different locations. One thing you could do specifically for Googlebot is to use the Inspect URL tool in Search Console to see which version Googlebot sees when it tries to access the German version or the Finnish version. Maybe that makes it a little bit clearer with regards to what is happening on Google's side. The issue is, for our US site, when the search engine robots are visiting the German .de and the Finnish pages, they are not redirected. Only when they are in Germany and they visit the .com pages are they redirected. So I'm just curious, I mean, if we are looking at it from the US bot's perspective, there shouldn't be any redirect on the German and Finnish pages or on our .com. OK, that sounds better. Yeah, so like I mentioned, we generally crawl just from one location for a website, and usually that's the US. So it wouldn't be the case that we would, from time to time, crawl from Germany or crawl from other IP addresses. We really do it all from the same region for each individual website. And for most websites, that's from the US. What I would do in a case like this is maybe send me some details of what exactly you're seeing, which URLs, and which kind of queries you're seeing the wrong language and country version for. And then I'll paste the site search for you in the chat. So this is the site where we are seeing it. We are actually seeing different versions in the query results. But you can forget about the parameters. We are seeing the English version and also the Finnish and German language. But it's getting better.
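To illustrate the situation being discussed, here is a minimal, hypothetical sketch of a geo-IP redirect that skips known search bots, so that crawls from US IP addresses are never redirected off the localized pages. The user-agent tokens are real crawler names, but the country lookup, URL scheme, and function names are assumptions made up for this example, not anything from the discussion itself.

```python
# Hypothetical sketch: skip the geo-IP language redirect for known search
# bots. Googlebot mostly crawls from US IP addresses, so redirecting it
# like a US visitor would hide the localized pages and can make them look
# like duplicates that should be folded together.

BOT_TOKENS = ("Googlebot", "bingbot", "DuckDuckBot")

# Placeholder mapping of visitor country to the localized site version.
LOCALE_URLS = {
    "DE": "https://example.com/de/",
    "FI": "https://example.com/fi/",
}

def redirect_target(user_agent: str, visitor_country: str):
    """Return a redirect URL for this visitor, or None to serve the page as-is."""
    # Never redirect crawlers, regardless of the IP they crawl from.
    if any(token in user_agent for token in BOT_TOKENS):
        return None
    # Regular users in a known country get sent to their local version.
    return LOCALE_URLS.get(visitor_country)
```

With this shape, a German user fetching the .com page would be offered `https://example.com/de/`, while Googlebot fetching the same page is simply served it, matching what John says about Google never seeing the redirect in that case.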
Like sometimes, when we search an English query, we actually get the Finnish or German metadata. But now, today, when we search, it's back to English. But it keeps alternating. So it actually affects the click-through rate for our English users. So what we have thought is, is it all right if we exclude Googlebot from the redirect? Is there any penalty if we do that? Because what we want is just for the user to go to the correct version, maybe in Germany. But for the US bot, if they're having problems displaying things correctly, we can just exclude them from the redirect. I don't think we would see the redirect. Because if you're not redirecting users from the US, then we wouldn't see what you're doing with German users anyway. So that should be something where I don't think it would change anything if you explicitly excluded Googlebot from that redirect. But I'll take a look at the sample that you have there and see if there is anything specific that you can change. Or maybe there's something on our side that we're picking up wrong. Because sometimes it also happens that we pick something up wrong. Yeah, sure. Thanks for that. OK, let me run through some of the submitted questions. And if you all have any comments or questions along the way, feel free to jump in as well. And probably towards the end, we'll have some more time to chat as well. So the first one is about mobile-first indexing. There are websites out there that are mainly used from desktop devices. And the question has a little bit more detail, some ideas about that. And the question is, will mobile-first indexing hurt those websites? They might still be responsive and everything. But maybe what Google sees is not really working as well on mobile devices as on desktop. So from a practical point of view, mobile-first indexing is primarily about indexing for us. That means we need to be able to see the content on mobile devices.
And if the functionality is slightly different, but the content is the same, or the content is what you want to have indexed, then that's all OK. Then you're pretty much set for that. So with that in mind, even if your website is primarily used by desktop users, once we switch to mobile-first indexing, we will index it like a mobile user. And it's not that it will be shown less frequently to desktop users or less frequently to mobile users. It's just that we use the mobile version for indexing, for determining what you want to have shown in the search results. So that's something where even websites that I would say are essentially desktop only, like the really old-school websites that are built with a tables layout, that are not responsive design, where when you access them with a mobile device, you have to kind of zoom in and try to find all of the content that way, those are websites that work really well with mobile-first indexing, because all of the content is exactly the same on desktop and on mobile. The display is slightly different, but the content is essentially the same. So those work really well for mobile-first indexing. What will kind of play into this a little bit, I think, is once we start using page experience as a ranking factor, within page experience we do have mobile friendliness as a ranking factor as well. So if a website is not mobile-friendly, then maybe that's something that will play a role in the ranking of that website overall. But with page experience, we've essentially just announced our intent to go in that direction, and we'll let you know at least six months before we start using it as a ranking factor. So that's something where you still have a bit of time to try to figure out what really makes sense for your website. The next one is also about mobile-first indexing. We're trying to be as prepared as possible for the mobile-first indexing switch that's happening in September.
How much can we rely on Lighthouse's SEO mobile results as an indication of what to expect when the switchover happens? So this is kind of a common point of confusion with regards to mobile-first indexing. It's not about mobile friendliness. It's really about mobile indexing. So if you're using Lighthouse to test how mobile-friendly a website is, then that wouldn't necessarily reflect what we would use for indexing. If it's a responsive website where the content is essentially the same and the presentation is very similar, then that's something where we will probably just be able to work with that right away. If you show different content on desktop and mobile, or if the responsive design is very different on desktop and mobile, then that's something where maybe you would see changes with regards to mobile-first indexing. So with regards to the way the content is presented, one of the things that I feel people haven't really focused on too much, or where we still see problems at least, is that if you have images or videos on your pages and they're very prominent on the desktop version, but on mobile they're very small, very tiny, in a corner or at the bottom of the page, then with regards to image search and video search, those are less likely to be good landing pages for images and videos, right? Because the user is trying to search for an image. They find the image in the search results. But if they click through to the page and then can't find the image on the page, that's a really bad experience. So if you care about image search or about video search, and you have a website that's switching over to mobile-first indexing, then it's really worthwhile to make sure that your image and video landing pages are really such that the content that you want to be found for is very prominent on those pages. Then a question about Core Web Vitals. There are custom fonts on a page that need to be loaded.
It's recommended to use font-display: swap because it will show the text to the user as fast as possible. Now with Core Web Vitals, it's recommended to use font-display: optional, because we don't want the page to shift when swapping the fonts, where we'd get a bad CLS score. To be honest, I don't have any experience with regard to that, so I don't really feel qualified to say which exact attribute you should be using for your custom fonts there. What I would recommend doing is maybe pinging us on Twitter, and we'll try to find someone for you that can help you to figure out what the optimal approach here is. Does Google use links as a part of geotargeting and determining the location of a website? From what I've been seeing, with geotargeting it takes a bit of time before we start ranking in major countries, such as the US or the UK or even Russia. We start ranking first in smaller countries with a low population. It's very confusing. So we use links in lots of places around Search. I don't know if we would particularly use that for geotargeting. But in general, with regards to geotargeting, you can tell us the preferred geotargeting of your website either by using a country-code top-level domain, where you essentially tell us right away what the geotargeting is that you want, or by using the geotargeting setting in Search Console. And those are really the primary ways of letting us know which countries you want to target. And all of these more exotic signals that we have to pick up are more for the situations where we don't have good information about a website. So that also includes things like the IP location of the server. If nobody tells us where this website should be geotargeted and we think it should probably be geotargeted, then sometimes the IP address of the server helps us. But if we already have other information for geotargeting, then we don't need those small details as well.
With regards to the time it takes for ranking, I think it's completely normal that if you start with a website, then you probably end up ranking first in areas where maybe there's a little bit less competition, where it's a little bit easier to get started. Whoops, let's see. Bit of noise in the background. Cool. We have a website version for the US. However, we're not able to display the same content for all US states. Is there a way that we can tell Google to send us visitors from, or just target, only certain states? If not, how does Google handle the fact that users from some states are very satisfied with our content, but for others, from the states where we were not able to display the content, their queries were not satisfied? Is Google able to see which states users come from, and depending on that, adjust our ranking for only certain states? So I think, first of all, we kind of have that situation that I mentioned before with the other international sites, in that we primarily crawl from one location. So if you're changing the content for different regions within a country, then probably we would never see that. So if you're doing that on a state-by-state basis or on a city basis, we would never really see that. We would probably see the content that is mapped to, I don't know, California, Mountain View, or wherever. It's a bit weird with IP addresses that come from data centers, because they end up being shared across the different data centers, and they just get assigned to one arbitrary location. So that's something where we would probably just see the version from one location. So that makes it a little bit tricky with regards to swapping out the content for different locations. Usually what I recommend in a case like this is to have a home page where you have some part that is customized by location and some part that is just general for your home page, so that when we crawl that home page, we have enough general content to rank your website for.
And the more customized content for individual locations links out to different versions. So you would have a link to maybe a version for California or a link for Alabama or whatever. And we'd be able to find those links anyway. So we can crawl all of those city- or state-specific pieces of content, and we can index them individually, so that if someone is searching for whatever your business does plus that state, then it would be easy for us to map that and say, well, actually, this is the piece of content that probably matches what you're looking for. So that's kind of the direction I would head there. With regards to Google automatically figuring that out, sometimes we can figure it out, sometimes that's a bit trickier. It depends a lot on how easily we can understand what the user's intent is. Is it that they really need something local? And if so, then maybe their location should be a factor that plays a strong role in the query. Maybe it's something where we don't know if the user is looking for something really local. Then maybe we wouldn't know to pick that local version of your page, and rather we would pick a generic version or maybe a different location version of that page. Because this is something that is kind of, I don't know. I think we're getting pretty good at it, but it's still kind of hit and miss. It's something where my general recommendation for international sites, but especially for sites where you're customizing the content by subparts of a country, is to try to recognize the location of the user. And if you can recognize that they're accessing a page that is not the best one for their location, then show a little banner on top so that the user can go to the better version. But they can still access that individual version.
By letting the user still access that individual version, we don't have any problems with indexing because we can access all of these different country versions or city or state versions from our crawler in California because we can still access that content. But for users, when they go to that page and they go to the wrong version, they have a little banner on top telling them, hey, actually, there's a better version for you. Probably check it out here. So that's kind of the recommendations that I would have there. When people use Google Gmail Instant Login or sign up and it redirects users away from our website to a Google login interface and then redirects back to our website after logging in, does Google understand that they're not bouncing? Or does it look like people are bouncing away and in turn damage our SEO ranking? The same for Facebook login redirects. So I think there's a bit of misconception here that we're looking at things like the analytics bounce rate when it comes to ranking websites. And that's definitely not the case. So if you have content on your website that is accessible to users once they log in, then using these kind of login providers, I think is a really good thing. From a security point of view, from what I hear from the security teams, this is actually a lot better than if you implement your own login scheme for most websites. Because creating a good and secure login system is surprisingly hard. So my recommendation would be to continue using things like this. And if you're using analytics to track it yourself, then obviously you need to figure out how you want to track that. But it's not something that I would worry about from an SEO point of view. The only aspect from an SEO point of view that kind of comes into play with these kinds of sites sometimes is that, obviously, Googlebot is not going to be logging in to your website. So if there's content that's only accessible after login, we would not be able to index it. 
So that's kind of the main aspect with regard to these kind of sites that's worth keeping in mind. My rank was number one, for most words. And the impressions on my site increased from 200,000 to 300,000 a day. But there was no change in the clicks. And the next day, my impressions dropped back to 200,000. And my clicks were the same. What could be the reason for that? It's really hard to say. So this is the kind of thing where I would recommend going to the Webmaster Help Forum and giving some of the details of what it is that you've been seeing. Because probably there are very simple explanations for this. But it's worthwhile to kind of talk with other people who have seen similar things. It might be something as simple as, I don't know, maybe an image was being shown from your website instead of a normal text listing. These things can sometimes result in the impressions going up really high. It might be that suddenly people are searching a lot more for your website. Or it could be that your website kind of went from the first entry on page two to the last entry on page one. And suddenly the number of impressions went up significantly. But maybe it's still the case that people are not clicking all the way down to see your site in the search results. So there are lots of these subtle things. But it's worth chatting with other people who have looked at this kind of data before. And showing a bit more about the things that you've been seeing, like which URLs, which kind of queries, what numbers you're actually looking at there. I'm doing a 301 redirect from a subdirectory on my site to a subdirectory on a new domain. Once they're redirected, I will be displaying a pop-up to inform users that they have been redirected to this new page. And once they read it, they can click Continue to move on. Would that affect anything? No, that would not affect anything from an SEO point of view. Because from an SEO point of view, we would see that redirect. 
We would follow that redirect. But we would not be sending any referrer information for all of the requests that we make to those pages. So purely from an indexing point of view, we would probably never see that pop-up. So we would be able to index the new content right away. We'd be able to follow the redirect and forward the signals to the new page. And all of that would essentially just continue to work well. I don't know how well this works for your users, but ultimately, that's something between you and your users, where you probably want to make sure that you're not getting too much in the way, but that you're informing them appropriately, if that's something that you're trying to do. I work on a site with separate mobile URLs. And recently, it switched to mobile-first indexing. Would a page benefit in any way from getting external links to m.example.com rather than the desktop URL? Have you ever tested, or can you speculate, what would happen if the canonical became the m-dot version on both versions, and the rel alternate were added to the mobile version pointing to the desktop version? So with separate mobile URLs, when we switch to mobile-first indexing, and in general, even before mobile-first indexing, we take the m-dot version and the desktop version, and we fold them together as one page, essentially, for indexing purposes. So any signals that go to either of these versions would be combined and focused on one version. Before mobile-first indexing, they would be combined on the desktop version. With mobile-first indexing, they would be combined on the mobile version. And what will happen from a practical point of view is, internally, we will pick the m-dot version as the canonical URL. That's the one we'll use for indexing. And we'll see the desktop version as kind of an alternate URL for this web page.
So when someone on desktop is searching, we will know that instead of ranking this mobile URL, we should be showing the desktop URL, that kind of thing. So internally, we would be picking the m-dot version as the canonical and kind of working from there. However, externally, from your side, you should be treating things as they were before. So that's kind of a bit confusing there, in the sense that you should continue to have the link rel canonical from the mobile version to the desktop version. You should have the link rel alternate from the desktop to the mobile version. The main reason we decided to keep that is because we're using this set of connections to recognize that these pages belong together and to recognize which one is the mobile and which one is the desktop version, and to keep it such that you don't need to watch out for when we switch to mobile-first indexing and then suddenly have to change your whole website around. So we prefer that you keep the normal connection between mobile and desktop, with the canonical pointing to the desktop version and the alternate to the mobile version, even if internally we treat it the other way around for indexing purposes. So that's sometimes a bit confusing if you look at the details a bit too closely. But essentially, our goal here is to make it so that you don't have to make any changes, but rather that we try to work out the changes on our side. And I guess just to be really clear on the first question, with regards to getting external links to the m-dot version, that wouldn't change anything at all. So with mobile-first indexing, or without mobile-first indexing, we combine all of those signals. It doesn't make sense to do any kind of separate work to promote your m-dot version over your desktop version or vice versa. We essentially combine all of those signals.
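The recommended annotations for separate mobile URLs can be made concrete with a small sketch that generates the two head tags being described: the desktop page declares the m-dot page as its mobile alternate, and the m-dot page keeps its rel=canonical pointing back at the desktop page. The domain, paths, and helper names are placeholders for this example.

```python
# Sketch of the annotations for separate mobile URLs: desktop declares the
# m-dot alternate, and the m-dot page keeps pointing its canonical at the
# desktop URL, even though Google internally indexes the m-dot version
# under mobile-first indexing. example.com is a placeholder domain.

def desktop_head_tags(path: str) -> list:
    """Tags for the desktop page: point to the mobile alternate."""
    return [
        '<link rel="alternate" media="only screen and (max-width: 640px)" '
        f'href="https://m.example.com{path}">',
    ]

def mobile_head_tags(path: str) -> list:
    """Tags for the m-dot page: canonical stays on the desktop URL."""
    return [f'<link rel="canonical" href="https://www.example.com{path}">']
```

The point of the sketch is that nothing changes on the site owner's side when mobile-first indexing switches on; the same pair of annotations keeps telling Google which two URLs belong together.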
An author on a prominent industry news site recently wrote that the rel canonical tag doesn't consolidate external link signals to the canonicalized page among a group of duplicates in a situation. Wow, this is complicated. In a situation where external links point at each of the pages in that group of duplicates. So I haven't taken a look at this article, so it's really hard for me to say what exactly was written in that article. But with the rel canonical, what happens there is we essentially use this as a signal for canonicalization. And for that, we need two steps. So on the one hand, we need to be able to recognize that these pages should be treated as one group. And we can do that by looking at the content, by looking at the pages. Sometimes we can recognize it in the URL if there are URL parameters that we can drop. Those kind of things, we also look at the rel canonical. So all of these things kind of come together. And then as a first step, we say this set of pages should be treated as a group. We should pick one of these pages from that group and treat it as the canonical URL. And then the second step is picking the canonical. And for picking the canonical, we use a number of signals as well. I think we have these documented in the Help Center somewhere, so things like 301 redirects. The rel canonical definitely helps us here. Internal and external links, things like site map files, hreflang links, all of these kind of extra annotations where you tell us a little bit more which of these URLs you actually want to have indexed. And we take all of these signals together and we try to weigh them appropriately. And we try to pick the right URL to show from this group of URLs that we think are essentially the same thing. So that's essentially what happens there. 
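The two-step process described here, grouping duplicates and then picking one canonical from the group using weighted signals, can be illustrated with a toy example. This is emphatically not Google's actual system; the signal names and weights are invented for the illustration, and only the overall shape (several hints combined into one choice) reflects what is being described.

```python
# Toy illustration (not Google's real algorithm) of picking a canonical
# URL from a group of duplicates by combining several weighted signals,
# such as 301 redirects, rel=canonical, sitemap listings, and links.

SIGNAL_WEIGHTS = {
    "redirect_target": 3.0,   # 301s are a strong hint
    "rel_canonical": 2.0,
    "in_sitemap": 1.0,
    "internal_links": 0.5,
}

def pick_canonical(group: dict) -> str:
    """group maps URL -> set of signal names pointing at that URL."""
    def score(url):
        return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in group[url])
    # Highest combined signal score wins; ties fall back to sorted order.
    return max(sorted(group), key=score)
```

The takeaway matches the transcript: rel=canonical is one signal among several, and when it agrees with the redirects, sitemaps, and links, the preferred URL collects all of the group's signals.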
And in the situation where we put all of these pages into the same group, and the rel canonical is one of the signals that aligns with the preferred URL that you want to have shown, then essentially all of the signals that go to any of the pages in this group are concentrated and focused on that one canonical URL. So that includes things like external links. That includes any other signals that we have for that page, also internal links. Everything is kind of combined into that one single URL. But depending on the situation, us recognizing that specific pages should be a part of this set of pages that we group together, that's something that might be kind of tricky if the content is very different, for example. And also with regards to selecting the canonical URL from that group of pages, that's also sometimes a bit tricky. That sounds easy, but it's not that easy to do in practice. And it's something where, I would say, every couple of days we email the team that works on canonicalization and ask them, why did you pick this one instead of that one? Because people just have so many unique configurations on the web, and our systems have to deal with the whole web. And sometimes we pick things in ways that don't make a lot of sense to humans when they look at these pages. Let's see. A question about content length and schema structures. Imagine a website that's presenting a feature about a specific topic. Along with the feature, there is a schema markup FAQ or how-to section. Is the word count of these additional sections taken into account for determining the word count of the feature? Or is the length of the feature just determined by the feature itself, without the FAQ or how-to sections? So I think maybe as a general comment, it's very easy to focus on word count on a page. And it's very easy to focus too much on word count on a page. So from our point of view, we don't use word count as a ranking factor.
It's not that you have to hit a specific word count and then suddenly your page is deemed high quality. Sometimes really short pages are really good for users. They perform really well. Sometimes really long pages are what users are looking for. That's something where you kind of have to find that balance yourself. It's not that from our point of view we would say you need to hit 1,000 words or 5,000 words. That's really kind of an arbitrary judgment that I think doesn't apply in a lot of situations. So when I tend to see questions coming in where people are saying, my high-quality page has 4,500 words on it, you should be ranking it better, then my first thought is maybe you're focusing too much on the number of words on the page rather than the quality of the content that you're providing there. So that's kind of my thought here as well, in that it's very easy to count the words on a page and to say this is so important because I have 2,000 words on this page. But actually, our algorithms don't look at the number of words. So you kind of have to take a step back and think about the page overall. And with regards to different structured data elements, it's the same thing. It's not that the word count matters. It's really the value that you're providing that's important here. That said, with regards to FAQ and how-to, and maybe some other kinds of structured data, I'm not 100% sure which other ones might also apply, maybe special announcement would be similar, it is something where we need to have enough text to show in this kind of rich result box for a page for us to actually show it. So if you have an FAQ section and the question is "why" and the answer is "because", then probably our algorithms are going to say, well, this is not very useful. Maybe you should elaborate a little bit more before we start showing this in the search results.
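The point about having enough text in the FAQ rich result can be turned into a small, hedged sketch: build the FAQPage JSON-LD but skip question-and-answer pairs whose answers are too thin to be useful. The 50-character threshold below is an arbitrary placeholder for this example, not a limit from Google; the JSON-LD shape follows the standard schema.org FAQPage structure.

```python
# Hedged sketch: before emitting FAQPage structured data, skip answers that
# have too little substance to be worth showing as a rich result. The
# threshold is an arbitrary placeholder, not a documented Google limit.

import json

MIN_ANSWER_CHARS = 50

def faq_jsonld(pairs: list) -> str:
    """pairs is a list of (question, answer) tuples; thin answers are skipped."""
    entities = [
        {
            "@type": "Question",
            "name": q,
            "acceptedAnswer": {"@type": "Answer", "text": a},
        }
        for q, a in pairs
        if len(a.strip()) >= MIN_ANSWER_CHARS
    ]
    return json.dumps(
        {"@context": "https://schema.org", "@type": "FAQPage", "mainEntity": entities}
    )
```

A "why"/"because" pair would be dropped, while a substantive answer survives into the markup, which is roughly the editorial judgment John suggests making.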
So that's something where maybe you'd want to look at the size of your questions and answers, just to make sure that it kind of aligns with what would be shown in the search results. But otherwise, I really wouldn't worry about the word count of a page.

Some websites use different approaches to user behavior for desktop, mobile web, and AMP. On mobile web, browsing uses a layered approach: you click on a page, and a page layer opens with less content compared to the web version. On desktop, a regular click goes to a URL with a redirect to a regular web page. And on AMP, you click through to a URL, since AMP doesn't give you the flexibility to do layers. Would this come under good or recommended practices?

I don't know how much it makes sense to use such vastly different approaches on different devices. So that's something where I would try to be a bit thoughtful about what it is that you're trying to do there, rather than just doing whatever you can do with your technology. It's very easy, similar to word count, to focus on a technology and say, well, I can do all of these crazy things, therefore I will do them, and it looks really fancy. But is that really what works best for users? I don't know. Maybe it is. Maybe it works well for your users. With regards to the different versions on different pages, the thing to keep in mind is that with mobile-first indexing, we will be indexing the mobile version of the page. So if the mobile version of your page does not use normal URLs for navigation, then we will have trouble indexing your mobile site, because we won't be able to follow those non-URL mechanisms to get to the content. So if the whole navigation on the mobile site is really purely JavaScript-based, you're swapping out different layers on a page, it stays on the same URL, and there are no actual links on the page, then we would probably have a lot of trouble being able to crawl and index that site. And probably we would not shift that site to mobile-first indexing at the moment.
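[Editor's note: the point about crawlable navigation can be sketched in markup; the class of example here is illustrative, and the URL and function name are invented.]

```html
<!-- Navigation a crawler cannot follow: no URL is exposed,
     and the content layer is swapped in purely with JavaScript. -->
<span onclick="openLayer('products')">Products</span>

<!-- Crawlable alternative: a real link with an href. The site can
     still intercept the click and open a layer for users, but the
     URL exists for crawling and indexing. -->
<a href="/products">Products</a>
```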
But at some point, we're going to kind of make the decision and say, well, we need to shift everyone over. And that will mean we'll shift your site over as well, even if it's not ready yet for mobile-first indexing. And if really all of the content on your mobile site is not accessible through normal URLs, then we will drop that from the index. So that content won't be shown in the search results at all. And with mobile-first indexing, it doesn't mean that it'll be dropped just for mobile users. It'll be dropped for desktop users as well. So if that content is not accessible on mobile, we will not be able to index it.

With regards to AMP and mobile and desktop, we treat AMP pages as being alternates by default. So we wouldn't use the AMP page for indexing. There are certainly ways that you can set up a website so that the AMP page is your mobile page, and you have an alternate desktop page, where you just have a desktop version and the AMP version. And in a case like that, we would see the AMP page as being a normal mobile page, and we would just index that normally. The other thing that kind of plays into this complex scenario of mobile, desktop, and AMP is that when it comes to the page experience score, kind of the Core Web Vitals, where we're testing the speed, stability, and usability of a web page, we will test the version that users end up seeing. So that means we would test the AMP version with regards to speed and kind of the quality and usability. From that point of view, we would not test the mobile version for that. So that's something where, in this situation, with mobile-first indexing, we would index the mobile version. We would use the AMP version with regards to testing usability and speed. And we would show the desktop version in the search results as an alternate URL when people on desktop are searching, but we would not index the desktop version. So it gets really complicated.
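[Editor's note: the alternate pairing described here is typically expressed with link elements; the URLs below are placeholders.]

```html
<!-- On the canonical (desktop/mobile web) page: point at the AMP version. -->
<link rel="amphtml" href="https://example.com/page.amp.html">

<!-- On the AMP page: point back at the canonical page. -->
<link rel="canonical" href="https://example.com/page.html">
```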
My recommendation for this kind of situation is to try to simplify things as much as possible for your website. And instead of having three different variations of the same page, find a way that you can use some kind of responsive design, so that you either just have two versions, maybe kind of desktop and mobile combined plus the AMP version, or maybe there's even a way to move to a pure AMP setup, where you essentially use AMP for the whole website, because AMP is a responsive web framework, so you could theoretically do that for a lot of things as well.

John, a question on that. For the websites that haven't yet shifted to mobile-first indexing, are those websites that need to do something about that? And will you be notifying websites that need to do something about that in the upcoming months?

Yeah, we've been notifying sites for a while now with regards to the bigger issues that we've noticed. So that's something where I expect us to continue doing that. With regards to mobile-first indexing, we recently did a review with the mobile-first indexing team. And it seems that most of the sites that are remaining kind of fall into two buckets. On the one hand, there are the sites that basically don't make any changes at all, which are probably a lot of sites that were either just created once and then kind of left, or maybe sites that have just, I don't know, gone stale over the years. That's kind of one big bucket. And the other bucket is really large websites that have complicated setups. Maybe that's something similar to the website that you're working on there. And that's something where we're looking into ways to make it easier for us to notify them of the specific issues that we're still seeing.

Great, thanks. I just had a further question. We've seen some changes over the last few months. Since April, Googlebot has been slowing down its crawl by about 40%. And this is across all of our classified sites.
And is that something that's a global initiative that you guys are working on, or is it something site-specific?

Probably more site-specific. So I'm not aware of us slowing down crawling in general. I think that would be kind of tricky, because the web is still growing. Content is still coming and going. Maybe it's just that we're focusing our crawling a little bit differently in between. But in general, we should continue to be crawling normally.

Hey, John. Hi. How are you doing? You're doing great today, right? Pretty good. All right, good to hear. John, I have a question regarding the Core Web Vitals. Using page experience as a ranking factor means you need to have enough user data to be able to tell these pages are fast, these pages are slow. Is that right? So when a page doesn't have a significant amount of traffic, how is that handled in this scenario?

Yeah. So for the Core Web Vitals, we do show data in Search Console and in PageSpeed Insights if we have enough data from the Chrome User Experience Report. So that's something where, if a website really has a low amount of traffic, then we wouldn't be able to show anything there, and we wouldn't be able to take that into account. But that's very similar to other kinds of ranking factors that we have. For example, if a website is really new, then we don't have a lot of historical data about the website, and we kind of have to guess. So that's something where, from our point of view, we would be able to take this situation and still say, we don't have any speed data about this website. Maybe it's a completely new website. Maybe it's something where there's just not a lot of traffic to this website, but we still have to rank it somehow in search. So it's not that we would remove it completely from search. We would just say, well, for this particular ranking factor, we don't know. We have to make some assumptions, but we will try to compare it to other websites anyway.
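[Editor's note: the Chrome User Experience Report field data mentioned here is available through the public CrUX API. Below is a minimal sketch that only builds the request body; the origin is a placeholder, an API key would be needed for the actual POST, and origins with too little traffic simply return no record.]

```python
import json

# Public CrUX API endpoint (requires an API key as a ?key= parameter).
CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def build_crux_query(origin, metrics=None):
    """Build the JSON body for a CrUX records:queryRecord request.

    When CrUX does not have enough traffic data for the origin,
    the API responds with NOT_FOUND instead of a record.
    """
    body = {"origin": origin}
    if metrics:
        body["metrics"] = metrics
    return json.dumps(body)

# Hypothetical usage: POST this body to CRUX_ENDPOINT + "?key=YOUR_API_KEY".
payload = build_crux_query(
    "https://example.com",
    metrics=["largest_contentful_paint", "cumulative_layout_shift"],
)
print(payload)
```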
That's all right. Thanks. Yeah. So I wouldn't worry about it if it's just that you don't have a lot of traffic yet. If you're working on your website, then over time you will probably have a bit more traffic. So hopefully that kind of settles down. But it's not something where we would say, well, we would never be able to show this website in search.

So that makes traffic a ranking factor in that scenario, we could say?

No, I don't think that would be safe to say, because it's really just about us having some amount of data. It's kind of like saying, well, age is a ranking factor, because if a website is completely new, Google doesn't know how to rank it, therefore it's not old enough to rank. But we have to deal with these kinds of situations where we don't have data all the time. There are lots of ranking signals that we have where we just don't know for individual websites, and we kind of have to guess. And especially with new websites, people see it in different ways. So some people talk about the honeymoon period, where Google is ranking my website very highly because it's a new website. And other people talk about the sandbox, where it's like, well, my website is new, and Google is not ranking my website at all. All of these situations fall into the general bucket of, well, we don't have data for this website, for this particular scenario, yet. So we have to make some assumptions. And we will see if those assumptions were OK.

Hi, John. Hi. Hi. So my question is related to Google Search. What I have noticed recently is that if you put the title and the date, the content is ranking at the top in the featured snippet, for keywords like, let's say, "phones under 10,000" or "mobile phones under 15,000." If you look at the top-ranking keyword, what the sites are doing is that they are just manipulating the date every day, every 24 hours. And these things are coming up in the featured snippet.
But if you look at the content, there's hardly anything new. It's just the date that's changing every 24 hours. So is it a ranking factor, or is it a bug, or what is your take on that?

It's hard to say without looking at some examples there. It could be that it's something that we should be able to improve. What I also notice with a lot of queries is that it's very easy for a site owner to get fixated on one specific query and to start seeing problems there. But when we look at that on our side, we see, well, 10 people searched for this query last month. And then from our point of view, it comes across as, well, maybe we don't need to spend a lot of engineering time to improve this particular query, or this particular kind of query, because it's just so rarely used. It's something that a site owner would use when they're copying and pasting their titles, or when they're explicitly checking something to see if a competitor is ranking for something exact. Then you might see that. You might look at it and say, well, it looks really bad, and Google is getting misled by this guy who's trying to manipulate the search results. But if you're the only person who is seeing this, essentially, maybe a handful of other people, then maybe that's not something that you need to worry about. And maybe that's something that our engineers don't need to worry about. But it can also be that it's affecting a query that is very visible and where we should be doing a lot better. So if you can give us some of those details, that would be fantastic. You could drop them here in the chat. I'm happy to pick them up and pass them on to the team.

I think someone has already posted in the chat, John. I see someone by the name "on digital" has already given some examples, I guess. So I just read his thing. And it's happening in search in India. So I just noticed it, and I thought I'd just ask. Thanks, John.

Sure, I mean, I'm happy to take a look at these things.
I always get the transcript afterwards, so it makes it a little bit easier to double-check with the teams on that. Cool. OK, let me pause the recording here. If any of you want to stick around and kind of chat a little bit longer, you're welcome to do that. But just for the sake of having a reasonable cutoff for the recording, thank you all for joining in. Thanks for all of the questions that you submitted. It's always good to see a bunch of new faces as well. Wishing you all a great weekend, and maybe I'll see some of you at one of the future sessions as well. Bye, everyone.