All right. Welcome, everyone, to today's Google Webmaster SEO Office Hours Hangout. My name is John Mueller. I am a search advocate at Google in Switzerland. And part of what we do are these office hours Hangouts, where people can join in and ask their questions around search and their website. As always, a bunch of stuff was submitted directly on YouTube already, so we can go through some of that. But if any of you want to get started with the first question, like, Barry, feel free to jump on it.

Sure. Thank you very much. First, it's no longer the Webmaster Central, it's the Search Central Office Hours Hangout. I know it's hard to change after saying it for, what, 20 years, and maybe 100 years in SEO years. In any event, passage indexing was launched Wednesday night Pacific, or Wednesday afternoon Pacific time, and there's a lot of confusion in the industry. I just want one clarification. I know Danny Sullivan said it's going to look exactly like any other snippet. It's not going to look like a featured snippet. It's not going to look like a weird snippet, even though you guys had an image of a snippet looking differently in the tweets about it. One, confirm that. And then two, are the passage ranking snippets using the scroll-to-text feature that's in some featured snippets and other elements? Those two questions.

I don't know about either of them. So I think with the first question, I don't really know how that will be shown. And it's hard for me to tell because I never see it here in Switzerland. Danny might know more there. He's been a bit more involved with that directly. With regards to the scroll-to-text part, I believe that's something that we're just still experimenting with. So not necessarily that it will automatically use that, or that it will always use that, or not. But my understanding is that's still something where we're not 100% sure how that will be embedded in the long run.

For specifically passage ranking or for any general snippet? Just in general.
Do you know if they're being tested for, I mean, would there be a difference for testing that in passage ranking versus any other snippet?

I don't think there would necessarily be a big difference there. I did see some tweets from people saying, oh, we're starting to see this more in Search Console since you launched this. I don't know if that's more of a coincidence or if that's kind of like directly tied to that or not.

OK, in summary, you don't know. I don't know. Yes. All right, thank you. And not even it depends. Yeah.

All right, other questions? Maybe I'll know some answers along the way. Anything else to get started with?

I can ask a quick one. That's OK. Hey, John. OK, go for it. So this is related to the unavailable_after robots meta tag. We're working with a classifieds site where that tag would definitely be helpful in order to stop Google from repeatedly trying those 404 expired ad pages. It's just that we're concerned that a lot of users are able to refresh their ad. So in that case, it won't expire at the date it's supposed to expire. They might refresh it for another week or even more, and they might keep refreshing it if they choose to. Is the unavailable_after option still a good one? Would updating it make Google understand that the date keeps moving forward into the future?

Yeah, I mean, if we refresh that page for crawling and indexing, then we would see the updated unavailable_after meta tag. So that would work. It's not like you can only specify it once, and then it's cemented like that. If we re-crawl that page and see the new tag, we'll take that into account. So I guess the situation would be kind of tricky if you set the unavailable_after meta tag for tomorrow, for instance, and we just re-crawl it the day after tomorrow; then we would have dropped it potentially in the meantime.

So is there any way to make sure that doesn't happen?
I mean, would using the sitemap and the last modified date help if we use the unavailable_after tag?

Sure. I mean, it's something where you could change the last modification date and say it changed now. It's not guaranteed that we would re-crawl the page right away, but it's one of those signals you can give us: this page has changed, double check to make sure that you're not missing anything.

But what happens if I set the unavailable_after tag for tomorrow, let's say, and the crawler knows that from tomorrow going forward, it should not crawl the page anymore. And let's say a week from now, the user decides to reactivate their ad. Will the last modification date in the sitemap make Google understand that? Well, maybe I should re-crawl that page because it seems it's not gone after all.

I mean, the unavailable_after meta tag doesn't say not to re-crawl it afterwards. It's more that the site owner is saying this will probably be a noindex or 404 afterwards. So it's more that Google can drop it out a little bit faster without needing to re-crawl it. It's not a sign that we would stop refreshing that page.

OK, but it will still help with the kind of overall crawl budget, making Google kind of slow down on re-crawling these kinds of pages. OK, that sounds good.

John, regarding this, I just wanted to understand: if we are using the unavailable_after tag on seasonal pages, like a Christmas day flights page, in that case, does Google keep the same page indexed even after the unavailable_after date, or will this drop it from the index also?

The idea is that we would drop it even without needing to re-crawl it. So our assumption is that if you have the unavailable_after meta tag set to a date, that that page is no longer available afterwards, and that it will be noindex or 404 afterwards. So our systems, I think the way that I understood it is, our systems are essentially going to treat it as if there is a noindex on there after that date, even if we don't re-crawl it by then.
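For reference, the unavailable_after rule discussed in this exchange is a robots meta tag in the page's head. A minimal sketch, with a hypothetical expiry date:

```html
<!-- Hypothetical expired-ad page: after this date Google may treat the
     page as noindex without needing to re-crawl it first. -->
<meta name="robots" content="unavailable_after: 2021-12-25">
```

If the user renews the ad, the site would serve a new, later date, and Google picks it up on the next re-crawl, which is exactly the timing gap being discussed here.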
So it means that the page will be noindexed for all the time when it is not in season?

I mean, it's something where if you have a seasonal page that has a limited lifetime, then you can definitely use that, especially to let us know about things that expire quickly, to let us know about the long tail content that we don't re-crawl that often. That's really useful for, for instance, classifieds sites. I think that's a really good example for that. Seasonal content itself, like if you have one, I don't know, Christmas page, usually that's going to be very prominent within your site during that season. So we're going to re-crawl that quite a bit. And if you add a noindex to that page at some point, we'll pick that up fairly quickly. So usually for that kind of seasonal content, the unavailable_after meta tag isn't really critical. It's really for the content that we don't re-crawl that frequently. OK, thanks.

Hi, John. Hi. I have a question about the robots.txt file. We see that some of our clients don't have any robots.txt file on their site. Now, if a site does not have any robots.txt file, does it affect its ranking or indexing or crawling?

No, it's totally optional to have a robots.txt file. If there is no robots.txt file, there are no robots.txt restrictions, essentially. So that's a perfectly fine setup.

And the next question is about the blog post category and blog post tag. The post category and post tag, do they have any impact on ranking of the blog post?

Not necessarily. So it's not that we would try to recognize tags on a page, but these are links. And potentially, they go to kind of a category page or tag page. And that would be another page that we could index, or that we could use to pick up links to your articles. So it's not that there is any inherent kind of magic around tags. It's just that it creates more links and more pages within your site. Thank you.

OK, let me run through some of the submitted questions.
We'll definitely have more time for your live questions as well. And if you have any comments or, I don't know, more questions about the questions that we go through, feel free to jump on in.

A large site has legacy code that generates parameters on internal links. These parameters are unique for every session. If the site serves Googlebot these pages with the parameters stripped, would that be considered cloaking?

Technically, yes, we would consider that to be cloaking. The search engineering team would say, oh, this is cloaking, like, you shouldn't do it. From a practical point of view, it would not be problematic. So it's not that the web spam team would take manual action on this. It's essentially something where you're kind of providing an optimization for Googlebot that you're not providing for users. And it's more a matter of you're making it kind of hard for yourself to maintain your site, because you always have to look at both versions. And if you never see the version that Googlebot sees, it's very easy to run into a situation where suddenly Googlebot gets error pages or Googlebot gets bad links. And every time you check that with maybe a local crawler, you don't see those broken links. So from kind of a best practices point of view, I'd try to avoid that. But it's not that you're going to get flagged or get a manual action because of that.

If a niche news portal is moved to a subdomain of a larger general news portal, can its visibility be affected by the main domain? So for example, before the migration, both sites ranked in top positions on a lot of topics. Now, many times, only the main domain portal is able to reach top positions and the first page, while the subdomain has dropped considerably.
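One way around the cloaking concern in that session-parameter question is to strip those parameters for all visitors, not just for Googlebot, so everyone sees the same URLs. A minimal sketch in Python, with hypothetical parameter names:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Hypothetical per-session parameter names generated by the legacy code.
SESSION_PARAMS = {"sessionid", "sid", "jsessionid"}

def strip_session_params(url: str) -> str:
    """Drop per-session query parameters so every session links to one canonical URL."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k.lower() not in SESSION_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))

print(strip_session_params("https://example.com/item?sid=a1b2&color=red"))
```

Serving these cleaned URLs to users and crawlers alike also sidesteps the maintenance problem John mentions, since there is only one version of the site to check.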
So any time you do this kind of migration where you're kind of splitting a site into separate parts, or you're taking multiple sites and combining them into one main site, I would expect to see fluctuations with regards to ranking in general, kind of temporary fluctuations, but also long-term fluctuations. And it can be that the overall result is something that is much stronger than the individual ones before. It can also be that the overall result is kind of a mixed bag of things and causes problems in the long run. So that's something where it's really hard to determine ahead of time what the actual effect will be when you go into the situation of merging or splitting sites. So from a theoretical point of view, yes, it can happen like this. It is something where you can put in time and get some help from experts and more experienced folks to see, like, should you expect problems or will this probably be OK? But it is different from the general site move situation where you just move one site to a different domain, where you're taking everything from one site and just passing it on to a new one. As soon as you merge or split things, then we essentially have to reprocess things overall and try to come up with a bigger new picture for that site.

Would adding an audio version of a page's content help with search in any way, other than the obvious accessibility improvement?

As far as I know, we don't do anything with kind of audio versions of content. We also wouldn't see that as duplicate content, so it's not that you have to avoid that. I mean, duplicate content itself isn't really something you have to avoid. But even if you kind of wanted to avoid the situation that you're suddenly ranking for the same things with different pieces of content, the audio version is something that we, as far as I know, would not even process separately. So at most, we might see that as a piece of video content and show that also with a video snippet.
But essentially, it wouldn't help or detract from a page's overall ranking.

John, wouldn't it help a site if they've embedded an audio file, for example, to play on the page? Doesn't that do anything? I mean, if you've got a page that literally just has text, versus a page that has text, pictures, video, audio, and therefore provides more variety and depth, is that not adding anything to the quality of that page, to have an audio file?

I don't think we would look at that and say, oh, there are different kinds of content here, it's a better page because of that. It might be that there are indirect effects. Like if users find this page more useful and they recommend it more, that's something that could have an effect. But it's not the case that we look at the types of content on a page and say, oh, two types versus five types, like the one with five types is better. I think it's a bit different with video and images, in that images and video themselves can rank independently. Like in image search or in video search, you can also have the same piece of content be visible on those other surfaces. But for audio, we don't really have a separate, I don't know, audio search where that page could also rank. I think the closest that could come there is the podcast search that we have, or the podcast onebox thing. But that's really tied to the podcast content type, where you have a feed of podcast information and we can index it like that. But just having audio on a page by itself, I don't think that would change anything automatically in our systems. Okay.

Okay, a website has a small snippet... Sorry, go ahead. Oh, hi, sorry. I have a question about mobile-first indexing. So I'm working on a site that just went through a site move. I actually spoke to you about it a couple weeks ago, but basically it's just a change of domain name. So it didn't change the URLs or site architecture.
So the old site was being indexed by the mobile Googlebot, but now it's being indexed by desktop and it hasn't switched over yet. It's only been a week actually since it launched, so I know it can take a bit of time. But I was wondering if there's maybe a sandbox effect, because the domain itself was pre-existing. It was kind of a parked domain, but it was a site that was probably previously indexed by Google. According to the crawl stats report in Search Console, it's currently being crawled 96% of the time by the smartphone Googlebot. So I'm wondering how long might we expect for it to switch over to mobile-first, or if there is potentially a sandbox effect because it was previously a site being indexed by desktop?

So I wouldn't worry about the mobile-first part for something like that, because we kind of have the timeline set for switching everything over to mobile-first anyway. So that will happen in, I don't know, what is it, March or April? I don't know what timeline we had there. So that'll happen anyway. But with regards to moving to a previously existing domain where there was parked content, you can definitely see some temporary effect there. Not so much in terms of kind of a sandbox effect or something like that, but more in terms of: if we've always seen a noindex page on this site for the longest time, then probably we're going to assume that it's still noindex for a while. And you might see kind of this, I don't know, period, I've seen it happen for maybe a week or two, maybe up to three weeks, where our systems just assume that this is still a parked site and essentially treat the new content that is there as being parked as well. And then it either doesn't get indexed at all, or it ranks kind of really badly in the beginning. And then at some point our systems go, oh, it's no longer parked, and essentially it just pops back in.
Okay, yeah, I think we are starting to see some recovery with rankings and stuff that had dropped in the last week. But yeah, I was just wondering if there is any sort of negative impact from being indexed by desktop, given that we're using a responsive design and it's the same architecture as the previous site.

No, no, that definitely wouldn't be the case. So it's not that there is any kind of ranking, I don't know, penalty or anything holding a site back if it's been crawled with a mobile or with a desktop crawler. That's really just a technical thing on our side: are we crawling with mobile or desktop, and then essentially the same content goes into the index.

Okay, yeah, because I'm seeing something that I'm not sure if it's related, which is I'm getting some AMP warnings for domain mismatch. And it seems to be like sometimes the new version of the site has the AMP page indexed, but the old page hasn't been re-crawled yet since the move. So I guess maybe the redirect hasn't been spotted yet, and I was wondering if that is because the crawling is primarily being done by desktop.

I don't think that should be a problem, because if we understand the connection between your kind of legacy pages and the AMP version of the page, then we would crawl those appropriately. So that's something where I could imagine you might see a temporary effect until all of that settles down a little bit, where maybe we have the AMP URL linked somewhere and we go off and crawl it with the desktop crawler initially, and then we realize, oh, we have to use mobile because it's AMP, then we would pick that up. But that's something that I would expect should settle down fairly quickly. Like, on the order of, I don't know, one, two, three weeks, something around that range.

Okay, and it wouldn't be necessary to, say, redirect AMP pages from the old to the new to speed it up?
I would redirect those too. Also images, anything that was essentially hosted on the old domain.

Okay, yeah, we've done them for all the other types of pages. I'll add the AMP ones though, thank you very much. Sure.

All right, let me just run through some more of the submitted questions first, because we always seem to run short of time for them. We'll definitely have more time for live questions as well.

A website has a small snippet of client-side JavaScript that updates the URL with the History API. For example, the server response shows /page and the browser address bar shows /page-updated. No resource named page-updated is seen in the network resources. Would Google see the updated URL? Which address would the content be indexed under?

So if we see during the loading of a page that the History API changes the URL, then we would try to classify that as a redirect, and we would try to take the destination URL, or the new URL that you provide there, and try to use that for indexing the next time. So essentially what would happen is, the first time, we would see /page. If it does the History API swapping to /page-updated, the next time we would try to crawl /page-updated and use that as the version, essentially, for indexing. I mean, it's not 100% certain, because it's essentially a question of canonicalization, but we would see that as a redirect, and redirects are a pretty strong signal for canonicalization. The important part here is that /page-updated is actually a page that we can crawl. So it shouldn't be the case that you swap out the URL for something that is nonexistent if you go there directly. So that's kind of the main thing to watch out for.

I've written, outsourced, about 600 articles in a year. All were made more or less in the same way. Still, some rank and show in Google and some are not in Google at all, even those that are more than six years old.
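To make the History API scenario concrete, here is a sketch of what that client-side snippet might look like. The path-rewriting rule is hypothetical, taken from the /page to /page-updated example in the question:

```javascript
// Compute the rewritten path; here we just append "-updated",
// mirroring the example in the question.
function updatedPath(path) {
  return path + "-updated";
}

// In the browser, the snippet would then run on load:
//   history.replaceState(null, "", updatedPath(location.pathname));
// Googlebot treats this swap like a redirect, so the rewritten URL
// must itself return content if fetched directly.
```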
Then I manually need to go through all those 600 articles and make some changes and update them, and then they start showing up in search. What should I do here, essentially?

So I think the main thing to keep in mind is: we don't guarantee indexing of all pages on every website. In fact, for most websites, we index just a small portion of the total website. So if you have 600 articles on your website, it can be completely normal that we go off and index maybe 100, maybe 500, maybe somewhere in between. It would be unexpected, from my point of view, if we would always index all pages on all websites. Because there's just, I don't know, there's a limit to the number of pages that we can index in our systems, so we have to try to prioritize a little bit. And by changing pages on your website, it's certainly possible that you trigger something where Google sees, again, that these pages have changed, and goes off and crawls them and sees, well, is there something important that we need to index here? And then maybe it'll index it. Maybe it'll be indexed for a while. Maybe it'll drop out again after a couple of weeks or a couple of months. It really depends. It's not something where there is a guarantee that we go off and crawl and index all of the changed pages, or that there's a guarantee that an old evergreen page will rank high, anything like that. So especially if you're talking about a large number of pieces of content on your site, it's worthwhile to figure out a system on your own where you can work to determine the quality of these pages, and to work to try to improve those pages over time, so that Google is also able to go in there and say, well, these are really fantastic pages, we do need to crawl and index more of these pages. And then over time, that'll pick up again.

Let's see, a Core Web Vitals question. We know that all user interaction pages or URLs are considered.
My question is, do Core Web Vitals need to meet the 75th percentile across all devices, or mobile only? The doc says, segmented across mobile and desktop devices, but it's not a clear interpretation, at least not for me. What does that mean?

I don't know. Pedro, I think you're here? I don't quite understand your question.

Yes, I am. My question is, the wording "segmented across devices," does it mean that the ranking boost, or the algorithmic factor, is going to be applied only to the segmented device, or do you have to meet it across all devices to receive the boost? For example, if you meet Core Web Vitals on mobile only, do you only get it on mobile? Or do you get it on desktop too?

I don't know. I don't think we have clearly defined exactly how the ranking boost would happen there. But essentially, we would track these per device, and we'd be able to take them into account per device. So I would assume it would not be the case that if your desktop pages are really fast and your mobile pages are slow, that we would still count that as being fast overall. My assumption is that we would try to separate that out. But I don't know what the final mix will be in terms of, do you just have to meet the bar, or is there a flexible range that you can reach to see the effects there with regards to ranking? I think for the badging, it's a little bit easier, and I hope we have some more information on that in the near future. But especially when it comes to ranking, it's oftentimes not something that is just on or off. Yeah. Thank you. Sure.

Let's see. Really long question. I have this website with subdirectories on it for Korean and Japanese. And I think the question goes into, a little bit, hreflang: do you need hreflang, how the links between these versions work, that kind of thing, and whether these different language versions kind of need to be in line. So I'm super-simplifying, sorry.
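As a rough illustration of "75th percentile, segmented by device" from the Core Web Vitals exchange above: one could compute the p75 of field samples per device and compare each segment against the metric's threshold separately. The sample values are made up, and the 2.5-second LCP threshold is the documented "good" bar; none of this is a statement of how Google actually aggregates the data:

```python
from statistics import quantiles

def passes_lcp(samples_ms, threshold_ms=2500):
    """True if the 75th percentile of the LCP samples meets the threshold."""
    p75 = quantiles(samples_ms, n=4)[2]  # third quartile = 75th percentile
    return p75 <= threshold_ms

# Hypothetical field data, segmented per device as the docs describe.
by_device = {
    "mobile":  [1800, 1900, 2100, 2600, 3200],
    "desktop": [900, 1000, 1100, 1300, 1500],
}
for device, samples in by_device.items():
    print(device, "passes LCP:", passes_lcp(samples))
```

With numbers like these, each device segment gets its own pass/fail result, which matches John's assumption that fast desktop pages would not make slow mobile pages count as fast.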
But I think, in general, when you have pages in different languages, in significantly different languages, so in this case, it sounds like it's Swedish content, Korean content, and Japanese content, that's something where we would be able to rank those pages individually. And it's something where, if we see someone searching in Japanese, we're not going to show them the Swedish version of your page, because it's pretty obvious that they're searching in Japanese and that your Japanese pages would fit that query best. So that's something where probably, to a large extent, in a situation like this, you would not even need hreflang, because it would not be the case that suddenly, accidentally, your Swedish pages rank for someone searching in Japanese. So I think that makes it a little bit easier in terms of the whole hreflang setup and what exactly you need to do there. In this general situation, if the queries are really split by language, you don't really need to focus on hreflang there. It would be different, for example, if you had something like a global brand, where someone searching in Japanese would search with exactly the same word as they would search in Swedish. And then it might be hard for our systems to understand: this user in Japan who is searching for this one name, do they mean the original Swedish site? Do they mean the local Japanese site? It's hard for us to understand. And in a case like that, we would use hreflang to swap in the appropriate country-language version for that user.

So I have a follow-up question, John. Sure. In the case where you have indicated hreflang appropriately through XML sitemaps, and specifically related to English-speaking countries, what should one do if you see in Search Console that Google's not getting it right? So we've indicated canonicals on both UK and US pages, for example.
We've marked our sitemaps up with hreflang, and we're still seeing Search Console say, hey, we've actually selected the UK version in the context of the US account.

Yeah, so that gets a little bit complicated, because there are other aspects that come into play there. So the first thing I would do there is try to reproduce that and see if that's actually happening like that. Because Search Console tries to do something really smart that, I don't know, makes things more complicated in Search Console itself, in that it tries to show the canonical version. And with hreflang, in cases where the same content is available for multiple countries, we can still recognize that actually this is duplicate content, we will index one version, and we'll use hreflang to show the appropriate local URL when someone is searching. So that's something you might see in English, where you have the same content in English. In German, it's really common that this happens, so that's something we see quite a bit here in Europe. And essentially, what happens is, in Search Console and the URL Inspection tool, if you check the UK version, you see the canonical shows the US version. That's kind of what's happening there, in that we understand it's the same content. We still see the hreflang there, so we can swap the URLs out in the search results, but we will index it as the US version. And the tricky part is, in Search Console, we report on the canonicals themselves. So in the search results, we might show the UK version, because we know they're equivalent, we can swap out the URLs. But in the performance report, we will track that as the US version in Search Console. So if you just look at the performance report, it's like, oh, the UK version is never shown. But if you try those search results out yourself, then you see that actually the UK URL is shown.
But since the content is exactly the same from an indexing point of view, we're kind of simplifying that.

So two quick follow-ups to that. One is, I think a pretty good indicator of a geo mismatch would be looking at a geo-specific Search Console account and looking at the traffic makeup between different geos. So let's say, for example, I look at Canada and I see that 50% of its traffic is coming from the UK. That would pretty clearly indicate that there's some mismatch. Is that appropriate?

Yeah, but then you might still be seeing that effect where we're kind of showing the right URLs, but we're tracking them with the wrong ones. So that is super complicated in cases like that. Because if we have, like, Canada and US versions and we pick one of those as canonical, then we will track even the Canadian URLs, when we show them, as the American URLs in Search Console.

Yeah, it's like you pull out all your hair and you start scratching your head.

And it gets really crazy there. From our point of view, it's something where the reporting in Search Console is confusing in cases like that. And you might say it's wrong, because it's reporting on the canonical but not reporting on the one that is actually shown. But from an indexing, from a ranking point of view, it should be working out fine. If you need to have it differently, then essentially what you need to do is make sure that these versions of these pages are unique enough so that we don't run into the situation where we say, oh, we will make it easier for you and just pick one version to index. If we clearly understand these are absolutely unique pages, we should index them separately, then we'll show them separately, we'll index them separately, we'll have separate canonicals.

Cool, any guidance on "enough"? There is no number or anything like that. But yeah, I think it's always tricky in a case like that, because theoretically, from a ranking point of view, it's OK. It's just everything around tracking is super complicated.
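For reference, hreflang in an XML sitemap, as described in this exchange, attaches xhtml:link alternates to each URL entry. A minimal sketch with hypothetical example.com URLs; the full set of alternates, including a self-reference, is repeated under every URL in the group:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://example.com/us/widgets</loc>
    <xhtml:link rel="alternate" hreflang="en-us"
                href="https://example.com/us/widgets"/>
    <xhtml:link rel="alternate" hreflang="en-gb"
                href="https://example.com/uk/widgets"/>
  </url>
  <url>
    <loc>https://example.com/uk/widgets</loc>
    <xhtml:link rel="alternate" hreflang="en-us"
                href="https://example.com/us/widgets"/>
    <xhtml:link rel="alternate" hreflang="en-gb"
                href="https://example.com/uk/widgets"/>
  </url>
</urlset>
```

Even with this markup in place, if the two pages are near-duplicates, Google may still pick one canonical for indexing and reporting, which is the Search Console behavior John describes.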
Hey, John. Hi. Hi. This is Mohan Redhi Abbas from India. I have just stepped into the SEO industry and I have two questions for you. The first one is regarding domains and subdomains. Suppose I have a domain called example.com and I have created four subdomains, like 1.example.com, and so on till four. So if I host a blog on all four subdomains, how will it be from the SEO perspective?

You can do that. I think it's something where some people use subdomains, some people use subdirectories. If you ask the SEO industry overall, you will get lots of arguments in either direction. So, I mean, technically, you can do that. From Google's point of view, you can use subdomains or subdirectories. That's perfectly fine. I would, first of all, try to think about what possibilities you have available from a technical point of view on your side. So if your hoster makes it really hard to put WordPress into a subdirectory and it has to be on a subdomain, for example, then maybe that's kind of the first way to get started. And then at some point later on, you can decide, oh, I know enough, or I can move to a different hoster and I can set my system up slightly differently. But that's kind of the thing where, as a first step, I would look more at the possibilities that you have available, and then really learn from making a website, learn from interacting with search. And over time, you'll see: maybe it makes sense to put it into a subdirectory, maybe it's fine like this, maybe I want to move to a whole different domain, maybe I want to focus on something different with my website. You'll kind of grow into that a little bit more.

The second question is related to backlinks. I have heard a lot about backlinks, that Google considers quality backlinks. When it comes to quality, what exactly do you mean by quality backlinks, and how does Google distinguish between natural and paid backlinks? Yeah.
So my recommendation, I think, especially if you're getting started, is not to focus on backlinks, because it's very easy to get stuck in the situation of, like you said, Google wants quality backlinks or Google wants natural backlinks; therefore, I will make my backlinks look like quality, or I will make my unnatural backlinks look like they're natural. And it's very easy to spend a lot of time focusing on that. So from my point of view, I would focus on your site first and really work to build that up really strongly first. And then over time, you'll see maybe there are opportunities where you can mention your site to other people, with regards to advertising perhaps, or with regards to other ways where you can create something really fantastic and point that out to other people and say, look at this cool stuff that I did. And then they link to your page because they think, oh, this is really neat. And essentially, when it comes to links, Google's point of view is that these should be things that are not organized by you, that are not paid for by you, that are not created by you. Rather, they should naturally be people who say, well, this is really cool, I really like that. Similar to how, if you make a website, you probably have seen lots of other sites where you say, this is cool, I will link to that, I will refer to that, because it's something useful for my users. Thank you. Sure.

Let me run through some more submitted questions, and we'll definitely have more time for you all as well.

Can I prevent a section of my page from being used in the meta description? Google is choosing a component within the product information as the description of a page, which reads really odd in the search results.

Yes, you can do something there. Namely, we have a data-nosnippet HTML attribute that you can apply to some HTML elements within your pages. And everything that is within these HTML elements would not be shown in the snippet.
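The data-nosnippet attribute John mentions is set directly on HTML elements. A minimal sketch, using a made-up product page fragment:

```html
<p>Hand-made oak desk with a smooth oiled finish.</p>
<!-- Content inside data-nosnippet is excluded from the search snippet. -->
<div data-nosnippet>
  SKU 12345, 120 x 60 x 75 cm, ships in 5 days
</div>
```

The page still gets indexed normally; only the snippet text shown in the search results is affected.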
So we call the description that you see there the snippet in the search results page. Oftentimes, it's based on the description meta tag. Sometimes it's also based on some of the content on the page itself. The important part is also that the description that we show, as well as the title, can vary for individual pages, depending on what someone is searching for. So if you do a site query for your pages, for example, you might see one title and one description, whereas if you search the way that you see in the performance report in Search Console, you might see that users see something slightly different. So before jumping off and trying to block certain parts of your pages from appearing in the snippet, I would double-check to make sure that this is actually something that people are seeing, and not just something that only you are seeing while you're trying to diagnose your pages. Will the .gq ccTLD get AdSense approval? I have no idea. I don't know anything about this TLD. I don't know about the AdSense approval process. So I have no idea. How to increase organic traffic on new blogs? I don't think I have a one-sentence answer for that. What I would recommend doing, maybe, is checking out the SEO starter guides. We have one that we've worked on for quite a while. There are also some from other bigger companies out there that cover a lot of the things that you can think about with regards to SEO. And there's definitely no single meta tag that you can put on your pages and suddenly get a lot of traffic to your site. But rather, you really have to think about this whole thing for a longer period of time. How can a team technically improve SEO when the URL was previously associated with malware? The URL was changed to get away from that, but we're still running into issues with visibility and having our content show in Google News.
So I don't know about Google News, but essentially malware is usually something where someone has hacked your website and added bad content to your pages, so that when people go to your pages, maybe it infects them with a virus or something like that. And when it comes to malware, that's something that is generally flagged by the Safe Browsing team, I think, and would be shown as a warning in the search results and in the browser as well. And essentially, when that is resolved, you can also see that in Search Console, and you can fix that on your site. And we will automatically recrawl your site every now and then to check if it's fixed. And if it's fixed, then we will remove that warning and everything will go back to normal. Also, in Search Console, you can tell us you fixed that issue, and we'll go off and check that. And if it's OK, then that malware warning will be removed. And essentially, the site ranks exactly the same as before. So there is nothing that is holding back a site for the longer term if it was hacked with malware at one point. Usually, that's something that jumps back completely, kind of like an on/off switch, right away. That said, if a site is hacked in ways other than just malware, for example, if someone adds a lot of content to your site that doesn't belong there, or if they add phishing pages to your site, or if they add hidden content on your pages, maybe links to other hacked pages or anything around that, then that's something we also try to catch as hacked content. And we try to ignore that as much as possible. But if that remains on your site for a longer time, then we might assume that this is actually something that you want to provide on your site. And perhaps you want to change your website into a Canadian pharmacy site; it's possible. It doesn't have to be the case, but it's possible. And in a case like that, we would start ranking your site with that in mind.
And if you fix that, which I hope you do, finding all of those things and being able to fix them, then over time, we will see that as well in the content that we've recrawled and re-indexed. But that is something that is less like real malware, where it's like a virus on a page and it's on and off, but more something where, over time, if that content is on your site for a longer period of time, we will associate it with your site. And if you remove it, it takes a bit of time for us to de-associate your site from that, to recrawl all of these pages and recognize, oh, it's no longer a pharmacy site, but rather maybe a jewelry store or something else. So that's something where there is no kind of magic thing that you can do to have Google refresh everything for your website. Making sure that all of this hacked content is really gone is critical; that's the most important step. Making sure that it can't get hacked in the same way again is also critical, because a lot of times these hacks are almost automated, in the sense that there's no hacker out to try to hack your specific website; rather, they just want to hack any website that they can. So that's something where you need to make sure that you don't just remove the hacked content, but rather you also close the hole where they got through. And for both of those things, I would almost recommend getting help from someone who's experienced with hacked sites, which could be an expert who has worked on these for a longer period of time. There are various people who have done that.
It could also be, as a first step, going to the Search Central Help forums, where the folks are very experienced with websites as well and may be able to find some of these issues and help you to clean out any of the remaining hacked content. It can get really tricky, in that if there is a security issue on your website, then multiple hackers could be involved, and you might have cleaned out some of the hacked content but not everything. It might be that there's still some hacked content lingering around that you don't see when you look at it with your browser, but that Google would see when it crawls your website. And getting some tips and help on that can be quite helpful and speed things up a little bit. Oh, wow. OK, still more people joining. OK, we're kind of getting into the last 10 minutes. I have a bit more time to hang around if any of you want to stick around longer. But maybe we'll go through some of the local things here. I see some of you are raising your hands, so I'll just go through the list as I have it there. For, I think, if I got your name right. Hi. I just have a couple of questions, short ones, hopefully. So for Core Web Vitals, it's said that field data is more important than lab data, right? So what happens if we fix our vitals 10 days before the update? How bad will that be, or will it be good, or how will Google look at it? Probably we would not notice that. So the field data takes, I think, around 30 days to be updated. So if you make a short-term change, then probably we would not see that at first. But of course, over time, we would see that again. And it's not the case that on that one date, we will take the measurement and apply that forever. So if you need a couple more weeks before everything is propagated and looks good for your users as well, then take that time, get it right, and try to find solutions that work well for that.
The tricky part with the lab and the field data is that you can incrementally work on the lab data, test things out, and see what works best. But you still kind of need to get that confirmation from users as well with the field data. Yeah, thanks. And the second question would be, does Google have any preferences when naming pictures? Not preferences directly, but in the image search guidelines, we do recommend that you use useful image names, specifically for image search. So instead of just using a number, maybe use, I don't know, some words describing the image as the image file name. Yeah, we would describe what's on the picture, actually. So that would be fine, I guess. Is there a reason why our English article shows up in Discover in the United States, but our Spanish one doesn't show up in Discover in Spain? I don't know. I don't know the exact triggering there from a Discover point of view, but I could imagine that we might look at things differently, depending on the language. But it's all kind of tricky with regards to Discover, because it's so much tied to what we see that users prefer to see in Discover. Yeah, that's tricky. OK, thanks. And the last one is, if a page has not been active for more than two years, but it holds pretty good positions, what will happen to it? It's not actually doing anything, you know? It depends. Like, some pages remain useful over time, even without updates. So just because a page is not being changed doesn't mean that it's lower quality. Yeah, OK. Cool. All right, Eric. Yes, hello. I have a few questions for you. So first of all, we were discussing this two weeks earlier. If Google sees a newer, more recent article for a topic, let's say a review of a product or whatever, does it mean that it prefers the fresher page over maybe the higher quality one? Not necessarily.
So we do try to take, I don't know, the age or the freshness of content into account for some queries, but it's not the case that that would always apply. And that's something that can also change over time, where if, for example, something happens in the news in one location, then suddenly, I don't know, the fresher content for that location will be more relevant for users. Whereas if nothing has happened there for a while, then maybe the more stable reference content for that location would be relevant. That's what I was hoping for, yes. But we have a few examples of reviews that we think are higher quality on our page than on others that rank higher, a lot higher than ours. And it's frustrating. Yeah, I think. Can we somehow debug this or maybe just solve it somehow? Change the content to, I don't know, fool Google into thinking it's fresher or something like that? I don't think you can really do that. I mean, what you can always do is give us information about these kinds of queries where you think the results are bad. But when talking with the ranking team about these kinds of issues, they really need to see that it's really significantly bad. So not like, my page is just as good at number five as the page at number four, kind of thing. Really, really good. Yeah. One paragraph, five lines of some copied text, versus a really big review with photos and everything. So yes, OK. Yeah. Should we contact the Google team, or how would we do that? What you could do is either post in the help forum. That's one way to do that. The folks in the help forum are able to look at that and give you some tips on that as well. And if there's something that is really weird, then they're able to escalate that to Googlers as well. OK, perfect. I thought it was only the community. And then I have another one. Does it matter if we have share buttons or links to social networks on our site?
Does Google take this into account, maybe the live accounts or the community on other social networks? No, no. OK, and I don't know if you can speak about Analytics. Not really. OK, but we noticed that we had some issues with Analytics for like a year. And I was asking, does Google Analytics filter out inputs like hits and page views from other domains? Like, let's say our code is on someone else's domain. Does this maybe change our statistics somehow? I mean, I would say that it should filter that out. I don't know. I actually have no idea how that is handled. I vaguely remember that there is some way to set that up, but I have no idea. I would double-check in the Analytics Help Forum. The folks there have definitely heard that question and might have a tip for you there. OK, perfect. Thank you very much, and have a great day, all of you. All right, Darcy. Yes, hey, John. A quick one about the home activities schema. Any idea if that has rolled out outside of the US, or plans to do so? I have no idea. The documentation still just says US. I'm sort of assuming it is, but we'd like it in Canada, too. I don't know. Maybe you have to move. I don't know. Usually the folks that are handling the documentation of all of the structured data stuff are pretty on top of changes like that. But sometimes the teams also make changes without telling the bigger teams at Google, and then it's suddenly live in other locations, and we didn't hear about it yet. But we try to keep that in sync as much as possible. OK, cool. Thanks. Cool. Mohammed, I don't know if you asked a question before, or if that's something separate. OK. Hamid. Hi, John. Hi. This is Hamid from India. I have two questions, so I don't know whether this is the right platform to ask, but it's related to AMP Web Stories indexing on Google. So we have two, three websites. One is in English, obviously, and then regional sites.
But what we have seen is that the English articles get indexed very soon, and we get the results as well, but the regional-language ones don't get picked up or don't show up in Google Search, and we don't get the page views. So is there anything to do? The question is whether there is a preference for English compared to regional languages. Now, we have tried to put English text in the regional-language versions as well, but it's not working. So are there any preferences? I don't know. So especially around Web Stories, I think there are two aspects that might be playing a role there. On the one hand, we show Web Stories in that unique UI in the search results, but that's only in certain countries, and we also show them in Discover. And especially the first one, Web Stories in the search results with that special UI, depends on whether or not that UI is shown in your country. I think, for example, in Switzerland, it's still not shown. So I would not see that. If someone were to make Web Stories specifically for Switzerland, that would be kind of awkward, because most people here wouldn't see them. The other part, in Discover, is probably where you're seeing similar things to the question from a while back with regards to English and Spanish content in Discover, where these are essentially just completely different environments or ecosystems in different languages. It's not automatically the case that if something in English is shown in English Discover for users in English, any localized version would automatically be shown similarly. So those are essentially completely different things. And it's very likely, I don't know about the languages in India, but it's very likely that users interact with Discover differently across different languages, where maybe in English they look at Discover more, and they see that more often, and your articles have more views, more clicks there.
And maybe in some other languages, they just don't use Discover as frequently, and therefore, your pages don't get that many views or clicks there. So what I would recommend doing there is not so much trying to make your non-English pages look like they're English as well, because then you'd just be showing them to people who are trying to find English content, but rather to continue to kind of build out your non-English content, and especially if you know that the web stories are not shown in the regions that you're targeting, then maybe it's worthwhile to say, I will focus more on the regions where I know it's shown for the moment. And then at some later point, when you know that it is shown more in those regions, then kind of put more energy into those versions as well. Thank you so much. Sure. Let me just pause the recording here. I'll stick around a little bit longer. It looks like there's still lots of questions left, lots of raised hands. So we can continue on a little bit, but thank you all for your questions so far. I hope you found this useful and insightful, and that that was a reasonably good use of your time. And hopefully I'll see some of you again in one of the future hangouts as well. All right, so with that, let me pause the recording.