First, let me jump in. All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do are these office hours hangouts, which I guess are not in the office anymore, but rather at home at the moment. But whatever, we'll improvise. I hope you're all doing well and safe wherever you are; I guess pretty much everyone is at home nowadays. We have a bunch of questions submitted, but if any of you want to get started with the first question, feel free to jump on in.

Hi, John. Can I ask this first? So John, I want to ask a question about the crawl depth of a website. So what happens is, I was working on a website and it has approximately 200 blog posts. When I initiated a crawl on it, I used Screaming Frog, and what I found is that the blog posts are buried deep in the IA. That is, the deepest blog post was at a crawl depth of eight. So how should I deal with that, and what is the best practice for it? What could you suggest?

In general, with crawl depth, you need to find a balance between having a flat architecture and a really narrow and deep architecture. And that's something where there is no hard rule where you can say for technical reasons it should be like this or like this. But rather, we generally try to crawl things that are closer to the home page, because usually the home page is the most important page. We tend to crawl things that are linked from there a little bit more frequently. And that kind of bubbles down. So depending on the website, whatever page happens to be the most important page for that website, from there is kind of where we start looking at the crawling. And that essentially happens through the links on the page.
And if we find pages that you think are really important just very far away from the pages that Google or your users think are important, then for us, it's hard to understand that this is actually an important page. So when it comes to crawl depth in a situation like this, if this is something that you find is really important for your website, which might be because there's important information on there, it might be because you're doing a special offer at the moment, it might be because you make the most money from a certain product, then I would just make sure that it's really visibly important on your website. So link to it from higher-level places within your website's architecture. That could be from the home page. That could be from category pages, things like that. So that's something where you kind of have to use your judgment and say, well, this is an important page for me. Therefore, I want to make sure that everyone, users and search engine crawlers, understands that it's important.

OK, so just to follow up, I just want to ask one thing: what would be the better approach? Should I categorize those blog posts into certain categories? Or should I use a breadcrumb implementation for that?

That's totally up to you. Sometimes it's enough to just categorize the blog posts that you have into kind of higher-level category pages, which are easier to crawl and find. Sometimes you can take individual posts and just highlight them on the home page of the site and say, these are my most important posts or my most popular posts, something like that, and link to them a little bit more prominently within the website. But how you do that is up to you. And it might also be that this kind of eighth level on that particular site is also OK. It's not necessarily a sign that eight is bad or anything like that.

John, I'll follow up on that. Sure. You said, I think, let me try to interpret your tweet.
Basically, you said something about the way to visibly see, according to Google, if Google thinks you have a flat architecture, is if you don't have site links show up for your website. Can you clarify that a little bit, if I missed something?

It's not so much that they don't show up. But with site links, we try to figure out what the context is of the other content on your website. And if we can really tell that a handful of pages are really associated with this one page that's shown visibly in search, then we might show that as a site link. Whereas if everything is really flat, if it's like everything is equally important and there's no hierarchy at all, then it's really hard for us to say these pages belong together with that other page that we're showing in the search results. So from not seeing site links, it's kind of hard to say that that's just because of a flat architecture. We don't show site links for lots of pages. But showing irrelevant site links, that's a really clear sign that we don't understand the architecture of the website, and your hierarchy is maybe too flat for us to understand which pages belong together. Cool, thank you. Cool. OK.

Any other questions before we get started?

John, with movie schema, is that intended for movie theaters? Or, I should say, is it appropriate for a site that just has information about a movie, like a review that has information on the actors and directors and that type of thing? Is movie schema appropriate for that?

I think so. OK. I'm just peeking at the documentation, and it feels like that's kind of one of the big use cases: you write about movies, you review the movies, those kinds of things. OK, great. Thank you.

All right. Anything else before I jump into the submitted questions?

I've got another question about a tweet you posted. OK, go for it. About manual actions, you basically said it could happen where you don't include sample URLs.
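Going back to the movie schema question: a minimal sketch of what movie markup on a review-style page might look like, with all titles, names, and values as made-up placeholders rather than anything from the discussion:

```html
<!-- Hypothetical example: a page about a movie, marked up as a Movie entity. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Movie",
  "name": "Example Movie",
  "director": { "@type": "Person", "name": "Jane Example" },
  "actor": [ { "@type": "Person", "name": "John Example" } ],
  "review": {
    "@type": "Review",
    "author": { "@type": "Person", "name": "A. Reviewer" },
    "reviewRating": { "@type": "Rating", "ratingValue": "4", "bestRating": "5" }
  }
}
</script>
```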
In what scenarios would you guys not include sample URLs in a manual action?

I don't know; I can make some assumptions. I can tell you what I think. OK. I mean, I'd have to totally guess, because I don't know that part of the system that well just from kind of day-to-day interaction with the teams. My guess is, if it's something that's purely site-level, then probably it doesn't make sense to show individual URLs. Whereas if there's something really specific on individual pages, then we probably should show URLs. But I haven't looked into that in a while. OK, thank you.

But even if they never add anything to the email, sometimes when you reply back, they send you an email showing what the issue was. I had one recently where they replied back and then kind of showed what the issue was. So that tells me, OK, there was a human interaction there. It's good to see that nowadays; we're not all robots.

Yeah, exactly. We try to do that where we can tell that there is a good-faith effort by the site owner, that they're trying to fix something and maybe they just need a little bit more information. I assume, just for time reasons, it's not possible to do that with every manual action and reconsideration request that comes in. Exactly.

All right, let me run through some of the submitted questions. And you're welcome to jump in in between if you have any related questions or need any more clarification.

OK. For a multilingual website, if we have different language versions with one domain under different directories, so slash EN, slash FR, et cetera, and we have hreflang set up in place, does it matter if the domain is a ccTLD, like dot de? Would it be better to move the website to a gTLD, like dot com?

For purely multilingual content, if it's different language versions, it doesn't matter if there is a ccTLD or a gTLD.
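As a sketch, hreflang annotations for directory-based language versions like these might look as follows on each page; the domain and paths here are placeholders, and each language version would list all the alternates, including itself:

```html
<!-- Hypothetical URLs on a ccTLD; the domain ending itself doesn't change
     how the language annotations work. -->
<link rel="alternate" hreflang="en" href="https://example.de/en/page" />
<link rel="alternate" hreflang="fr" href="https://example.de/fr/page" />
<link rel="alternate" hreflang="x-default" href="https://example.de/en/page" />
```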
On the other hand, if you have content that is specific to individual countries, so for geotargeting, then that would be something where hosting it on a ccTLD makes it a lot harder, because you can't specify the geotargeting for other countries. So using a gTLD, a generic top-level domain, like dot com, would be a lot better there. But like I said, if it's purely just different language content, if it's not specific to individual countries, host it wherever you want.

Is it true that Google uses font size to detect headers? For example, if my h2 is 35 pixels and my h1 is 25 pixels, then Google is going to say, well, the bigger phrase is more important, and use the h2. Or, second, do you use this technique only when there are no h1 through h6 tags on a page, like if there are only divs and spans?

I think we do a combination of all of these things. So on the one hand, we do try to understand the headings on a page, and headings are ideally flagged with the h1, h2, h3 tags, where you can semantically tell us what the heading of individual pieces of content on a page is. And that's really useful for us. On the other hand, we also render the pages, so we can see which text is actually visible and which text is not as visible. We see the overall layout of the page as well, to better understand how things kind of work together there. So from that point of view, if you're trying to optimize at this level, which probably does not have a giant effect on your website, but if you're trying to optimize on this level, I would make sure to use proper headings, and I would make sure to display those headings in a way that is relevant for the user as well. So that's kind of the direction I would go there. If we don't have any headings on a page at all, then obviously we have to guess. And sometimes we guess right, and sometimes it's pretty hard.
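To make the headings point concrete: the semantic tags carry the structure, and the styling can then be whatever suits the design. A sketch, reusing the pixel sizes from the question:

```html
<!-- Use semantic heading tags for structure; font size is a styling choice
     and doesn't need to mirror the importance of the heading. -->
<style>
  h1 { font-size: 25px; }  /* still the main heading, even if visually smaller */
  h2 { font-size: 35px; }
</style>
<h1>Main topic of the page</h1>
<h2>A subsection</h2>
```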
For what it's worth, especially with headings, that's one area where we found some trouble with switching sites to mobile-first indexing, because, for whatever reason, some sites don't like to use headings as much on the mobile version of their pages. And in those cases, we do lose a little bit of information there. So that's something where, especially for images, if we don't understand which heading an image belongs to, then it can be a bit tricky for us to show it appropriately in the image search results.

It also has accessibility implications. You want to make sure that people using screen readers just get a structure to run down, and that doesn't work with font sizes. It doesn't matter for SEO as much, but for accessibility it does.

Yeah, that's definitely another thing to keep in mind. I always forget to explicitly call that out, but it is something where it's good to watch out for these kinds of things.

The WebPage schema natively has a breadcrumb schema field built into it. The web page itself, not the schema but the rendered page, possibly includes a header with a menu, a footer with a menu, and so on. Does it make sense to add these header and footer elements nested within the WebPage schema, or separately as their own schema, since they're not natively included in the WebPage schema?

Wow, that's a confusing question, with lots of web pages and headings and navigation elements. So I think, just first off, this probably makes absolutely no difference for us at all when it comes to search. So that's something where, if there's a specific use case you have in mind with regards to the schema markup on those pages and that use case works with this kind of a setup, then obviously go for it.
On the other hand, if you're mostly worried about how Google would interpret this and use it for SEO reasons, then I would spend my time somewhere else, because this kind of level of structured data nesting and integration within a page is not something that we would show in the search results. So it's very easy to spend a lot of time on something like this and to focus on lots of theoretical variations of how this could or should be nested. But if it doesn't have any effect at all in search, and what you're really trying to do is achieve something for search, then probably it's better to spend your time somewhere else. With regards to how it theoretically would be done, if you care about the schema.org variations, I honestly don't know. You probably need to ask someone on the schema.org side. I believe they have a Google group where you can probably ask this kind of a question. But again, for SEO reasons, I don't think that changes anything at all.

When a bot is scanning a page and finds schema markup, does it validate the schema's accuracy through schema.org, which would mean sending a bot to validate for every schema? Or does it validate via Google's own database?

We validate the structured data with our own systems. So I believe we regularly sync those with the schema.org setup. I don't know what the technical implementation is with regards to syncing the schema specifications, but that's something where we sync with the schema.org setup, and we have our own set of rules as well with regards to which fields and values are required or optional, or what kind of restrictions apply to individual fields. So that's something that essentially happens on our side in the background, and is not something that is done one by one when crawling and indexing the web.
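For reference, a typical JSON-LD block with the schema.org reference in question; the values here are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Organization",
  "name": "Example Co"
}
</script>
<!-- The @context line is a vocabulary reference, not a URL that gets
     fetched during indexing; the http and https forms are equivalent. -->
```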
So that's another reason why using HTTP or HTTPS in the JSON-LD markup that you have there doesn't matter so much for us, because we understand HTTP schema.org is the same as HTTPS schema.org, and it just tells us we should use our validator for the structured data. It doesn't mean that we need to double-check if this HTTP or HTTPS URL is the right one. So that makes it a little bit easier and allows us to crawl and index a lot faster.

Does Google, which remains very strict with webmasters with regards to correct information for users, do the same with news sites in some manner? It is often noticed that many news sites nowadays publish fake news regularly and move on to the next paid news when the truth comes out, frequently without clearly declaring that the said news turned out to be false, with relevant apologies. That's a complicated sentence. Does Google downgrade their rankings based on some trusted-sites logic?

So I'm not aware of us explicitly looking for this kind of information within web pages to see, like, is this the correct information or not. But I'm pretty sure you would see a lot of indirect effects there, where if a website regularly posts information that is not correct and users end up not trusting that website, then that's something that would probably be reflected in our search results over time. So it's not that Google's algorithms go out and kind of fact-check the math on all of these pages, and if it's not correct math, then we demote them in search. But it's more that, well, we see that there are lots of good signals for this page and not a lot of good signals for another page, so we'll try to rank the good page a little bit better.

If we have critic reviews along with user reviews, how should we implement schema? One schema for user reviews and one schema for critic reviews? Which ratings will Google display in the search results? What's the best practice when we have both kinds of reviews?
So my understanding is that both of these types of structured data would be shown slightly differently in the search results. So my general recommendation here, when you're faced with a situation where you have essentially different kinds of structured data on a single page, where only one of them is shown in the search results, is to try to make a decision yourself. So instead of kind of implementing all of this markup on a single page and saying, well, I hope Google chooses the right one, and you don't really know which one is the right one, I would make a decision and say, well, I want Google to show my critic reviews, or I want Google to show my user reviews; therefore, I will explicitly mark those up and maybe not mark up the other parts. So when you're seeing this kind of situation, where multiple kinds of search results are possible but you can't combine them, then pick what you would like to have shown and be as clear and direct as possible with regards to that, so that our systems can pick it up and say, oh, these are clearly user reviews, they're marked up properly, we should show them in search.

A question about adding a noindex tag to 404 pages. In my understanding, if a page was removed and has a 404 status on it, it's already telling crawlers all they need to know. So adding a noindex tag isn't really adding any value, apart from maybe the speed of removal from the index. But if my clients already included noindex tags on the 404 pages, are there any negative consequences of that setup?

So the easy answer is: when we see that a page returns 404, we ignore all of its content. So whether or not it has a noindex on there, whether there is, I don't know, a really fancy picture on there or not, whether the 404 page is essentially just the text 404 or a really fancy 404 page, we don't see any of that at all when it comes to search. We see the status code 404, and it's like, that's OK. That's enough for us.
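As a sketch of that, the status line is what matters; a hypothetical response for a removed page:

```http
HTTP/1.1 404 Not Found
Content-Type: text/html

<!-- With a 404 status, the body below is ignored for search, including
     any robots meta tag, so this noindex is neither helpful nor harmful. -->
<meta name="robots" content="noindex">
<h1>Page not found</h1>
```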
So adding a noindex tag there wouldn't change anything at all, neither positively nor negatively, nor would it speed up the removal from the index. As soon as we see the 404, we understand, well, this is probably something we don't need to index. So with that in mind, if they already have a noindex on the 404 pages, that's fine. If they don't have it there, you definitely don't need to add anything.

If a website has millions of URLs with parameters that don't change the content, or only change it slightly, do you recommend configuring these parameters in Search Console? Is it strongly recommended?

I think it really depends on the website itself a little bit. To a large part, we try to learn these parameters automatically. So often, if you go into the tool in Search Console, you'll see Google has already determined that these parameters are less important or more important, for example. And in a lot of cases, when we have worked with a website for a while, we'll figure out which parameters are important, which ones are not important, and treat that appropriately, automatically. If you're starting out with a new website, or you're adding new parameters to an existing website, and you really have millions of URLs that use these parameters and maybe thousands of URLs that don't use any parameters, which are the normal content that you would like to have indexed, then using the URL parameters tool is a great way to let us know about that. So that's something where I would use it especially if you're starting out, especially if you're making bigger changes on your website. But if it's been running like this for a number of years, probably we've already figured that out.

On our price comparison website, we also have pages related to the shops and sellers who sell products on their sites. On the shop pages, we have shop info and ratings and reviews. So we implemented organization markup with reviews and ratings in February.
But still, we have not gotten any impression in rich results yet for those pages in the search results. I'm seeing our competitors are displaying rich results. What can we improve here? Should I implement rating and reviews in separate schema because that's the only thing I want to see with our shop pages in search? So I hope the review stars are not the only thing that you want to see in search, but rather kind of your content should be visibly indexed anyway. That's kind of the most important part. Looking at the review markup, one of the things that has changed recently, I don't know if that was in February or sometime before, everything is a blur. That's more than a couple of days back. One of the things that has changed there is that we only show the review markup for a limited number of objects. So that's something where maybe you're marking something up with a review that is not one of the supported types. And that might be a reason why we might not show that in the search results. The other thing to keep in mind is that, especially with review markup, the reviews should be specific to the primary object of the page. So if you have a shopping comparison site and you have one page with lots of products on it, and it's just from this one company, for example, then that wouldn't be relevant to mark that up with the review markup, because you don't really have one primary item of a page. You have lots of different items. Whereas if you have one product that you're selling, which happens to be from one manufacturer, and the reviews are all specific for that one product, then that would be a good use of the review markup on the page. So structured data should be as specific as possible to the primary object of the page and not something general for a page. Being a newbie blogger, how much time will my site take to rank in the Google search results after implementing both on-page and off-page SEO techniques? 
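Going back to the review markup discussed just above, a sketch of reviews scoped to one primary product on the page; all names and numbers are placeholders:

```html
<!-- Hypothetical example: the reviews are specific to the one primary
     item of the page, not general to the page or the whole shop. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Product",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.2",
    "reviewCount": "87"
  },
  "review": {
    "@type": "Review",
    "author": { "@type": "Person", "name": "A. Customer" },
    "reviewRating": { "@type": "Rating", "ratingValue": "5" }
  }
}
</script>
```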
So I don't think there is any specific amount of time for a site to rank in the Google search results. And it's really kind of a tricky question in itself, because often we just index websites as quickly as we can, but that doesn't mean that we show them as visibly as possible. So everything around ranking depends a little bit on what else is happening in those search results. We might be indexing a page within a couple of hours, a couple of minutes, depending on the website. But if you're targeting a topic that is very competitive in nature, where other people have spent a decade or longer working on their website to improve it, to make it really as fantastic as possible for that one question, then it's not going to be something where you can just implement a little bit of markup and do some off-page SEO techniques to suddenly rank above these other people who have worked so much and done so much good stuff to appear in the search results. So that's something where there's no fixed amount of time. It doesn't mean that you'll never be able to show above other people in the search results, but it does mean that when we look at the search results, we need to figure out what is relevant for the user. And if we have a lot of information that's telling us these existing pages are really relevant, because they've been doing really well for such a long time, then that's a strong signal for them. On the other hand, if we see, well, these existing pages haven't been updated in a long time, they're outdated, they have obsolete information, and there's a new website out here that's providing fantastic information, talking about those themes in a way that others haven't before, then we might choose the new website instead. So these kinds of things are really impossible to say, and it's impossible to give a time until a website will rank in the Google search results.
I'm wondering if the current COVID-19 crisis will push back Google's plans for mobile-first indexing for all sites from September to later this year or early next year.

Good question. I'd love to have all of your input on that as well. So I've been in touch with the mobile-first indexing team, and for the most part, they're still seeing sites kind of preparing and getting ready for mobile-first indexing. So it feels like people are still working on their websites, even if they're maybe not working in the office anymore. On the other hand, the whole situation is really hard to take, and it really throws a lot of companies off, throws a lot of websites off, and the people working on websites. It's very challenging. So that's something where we might say, well, we're seeing people are struggling with this, and we don't want to create an extra burden by saying you have to do all of this extra work until September; maybe we should push it back. But it would be really useful for me to get some feedback from any of you, on Twitter or wherever else, to kind of let us know: is this something you're seeing people struggle with, or do you think this is something that people are planning to do regardless? Let me know. I'm happy to pass that on to the team and to figure something else out. My guess is we will make a decision on this maybe sometime in the next month, and then either update a blog post or tweet something with the official account with regards to the actual date.

I feel like Google sold me a bad domain and is doing nothing about it. I bought a domain from Google in March. A few days after having my old site, which was without penalties, redirected to the new domain is when I found out about a pure spam manual action. Two reconsideration requests have been denied. My site has completely new content and is free of spam. What are my options, besides asking the community, because even they don't know?

So that's really hard to say.
I mean, from our point of view, the whole search ranking side of things is not something that the Google Domains team works on. So they would not know which domains had a manual action or not. In general, whenever you're buying a new domain name and you want to move from one domain to another, it's worthwhile to do some research and to figure out what happened with the domain name that you're considering, and to really look into the details of what was hosted there in the past: is it something that I can deal with? Do I have to deal with anything or not? Is this something where maybe there are a whole bunch of crazy links pointing at the domain name, and it was used by, I don't know, spammers hosting all kinds of crazy content? That's something where, if you move to that domain, then you have to live with that and work on ways to improve the situation. So that's something where you really need to do some research ahead of time before moving a website to another domain, especially if it's something that was used in the past. With regards to this specific case, it's impossible for me to say, because I don't know the websites involved. But generally speaking, a pure spam manual action is based on the content. It's not based on a domain. It's not something that would be transferred from one owner to the next owner. It's really based on the content that you host there. So if you're seeing a pure spam manual action, then that's really something where you need to think about your content really hard and think about why your content would perhaps trigger this. And it's really hard to say, because I don't know what the site is here. It's certainly possible that the web spam team gets one of these things wrong. And a reconsideration request is a great way to get it in front of another person on the web spam team.
But it feels kind of unlikely that you would have two reconsideration requests and the initial manual action happen if there's really nothing with regards to the actual content here that's problematic. So I mean, it's certainly possible that you have really bad luck. If you really feel that it's one of these cases where you have fantastic content and the web spam team just doesn't realize it, then feel free to ping me on Twitter so I can take a look and pass that on to the web spam team to double-check. But it really feels kind of iffy, just based on the question itself.

I hope you're doing well despite all of this coronavirus crisis. The question I have is the following. We recently transferred our website, example.com, to a multilingual one. So for most language versions, we have the ordinary setup, such as example.fr, example.nl, example.es. But for the Swedish extension, our international sales director requested to keep the original domain of the acquired company. So instead of having the ordinary example.se, we have random.se. I was totally against this setup, because I didn't believe Google would see this domain as the Swedish version of our primary domain, but would instead treat it as a separate domain. Please clarify, from Google's perspective, how this is going to be seen by Google.

OK, so in general, these are all separate domains. We would not infer, just from the example part in the beginning, that these domains somehow belong together. Essentially, these are completely separate websites from our point of view. However, for most multilingual or international setups, you would use something like the hreflang annotations between maybe the home pages, maybe between other pages on the websites. And for hreflang, it doesn't matter which domains you use. So if you have example.fr and random.se, you can have the hreflang annotation between those two. And that tells us all we need to know: that this page on random.se has an equivalent in French on example.fr.
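A sketch of such cross-domain annotations, using the domain names from the question; the page paths are placeholders:

```html
<!-- On a page of the Swedish site, random.se: -->
<link rel="alternate" hreflang="sv" href="https://random.se/sida" />
<link rel="alternate" hreflang="fr" href="https://example.fr/page" />
<link rel="alternate" hreflang="nl" href="https://example.nl/pagina" />
<!-- Each of the other language versions would list the same set of
     alternates back, so the annotations confirm each other. -->
```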
So from that point of view, it doesn't really matter which of these domain names you use for which country version. For the country versions, it sounds like you're using country code top-level domains, which is great. It means geotargeting is already automatically handled. If you just want international language versions, then you don't really need to use country code top-level domains, but if you want to, that's also fine. So from my point of view, I wouldn't see this as any kind of a deal breaker or any critical issue if you don't have example.se and instead have random.se.

A question about a multilingual website. We have a proxy buying website that helps users from different countries buy clothes from a Chinese auction website. We use an API to get the product information presented on our website in English, German, et cetera. While we manually translate information about the product categories and provide UIs in each language, there's no realistic way for us to manually translate the product names and descriptions. If we leave them in Chinese, that's not a good user experience, and it also does nothing for English SEO. Providing automatic translations would greatly help our users. However, it looks like automatically translated content is against the webmaster guidelines. Do we have to leave the products in Chinese to avoid a penalty?

So that's an interesting question. I think there are probably lots of things that could apply here as well. But in general, I would see the automatic translation as something where, in many cases, the quality of the translation is just really bad, and that's one of the primary reasons why we say this is not something that you should be doing. It might be something that changes over time, where, if we see that these automatic translation setups are able to create content that's of really high quality and really easily understandable, then maybe you can do that.
Maybe that's something that would make sense there. But overall, we often see that these automatic translations are just borderline gibberish, even, where it's really hard to tell what those products, in this case, would actually be about. So that's kind of the primary consideration I would have here: if you're taking content and you're just automatically translating it into a number of different languages, and the content that you get out of this is of such low quality, then keep in mind that we will look at those pages and say, well, this is low-quality English content, it's low-quality German content, why should we show it at all in our search results? So it's not so much about whether you use automatic translations or human translations. It's just, well, the content is really bad and hard to understand. Another thing that sometimes plays a role, though I don't know if it applies so much when you have this many products, is that a lot of times it's not just a matter of translating a product name or product description; you actually have to localize it and write in the way that users would expect, and categorize and group your content in a way that users would expect for those languages. And that can sometimes be subtly different across different languages, different cultures, those kinds of things. So just going back to your question here, should you post it in Chinese or in English? I think the general configuration is something you might want to reconsider. I don't think it's something where the manual web spam team will take a look at it and say, oh, this is terrible, we should remove the whole website. But more kind of from a quality point of view, our teams might look at it and say, well, we should not be ranking this website, because it's really not that useful content.
For the new image license metadata for Google Images, which can result in a badge in Google Images, I know some large photo sites are adding the necessary structured data and license pages. But some are adding license pages per image. And that's resulting in the addition of tens of millions of new URLs to the site, basically doubling the number of pages. For example, a site with 30 million images might be adding 30 million license pages. For some of those sites, the URLs were both crawlable and indexable. So those sites are adding double the amount of URLs that can be indexed. Is Google's recommendation to disallow the license URLs in robots.txt, or do the license URLs need to be crawled by Google in order for the license badge in Google Images to show up? I can see potential problems with letting Google crawl and index tens of millions of additional URLs. So the additional URL here, I had to look this up. I saw this. I think the same question was submitted last week as well. The additional URL is that on a per-image basis, you can specify metadata for licensable images. And per image, you can specify where the user can buy or license this image. So essentially, it's a link to a checkout page, if you will. From that point of view, like other checkout pages, you don't necessarily need to have those indexed. So if you use a noindex on those pages, or if you disallow crawling of those checkout pages, then that's perfectly fine. That's totally up to you. So from that point of view, you can definitely block those from being indexed. It might make sense to find other ways to deal with that. For example, if you have the image landing page, you could just have a checkout button there. And then people will go to the same image landing page and be able to do the checkout flow there. But ultimately, you have to work with whatever restrictions you have in your CMS. And sometimes just creating a separate page for the checkout is the easiest approach. So that's totally up to you.
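For reference, the image license metadata being discussed is the schema.org ImageObject markup with the `license` and `acquireLicensePage` properties. A minimal sketch of building that JSON-LD in Python follows; all the URLs are placeholders for a hypothetical site, not anything from the question:

```python
import json

def image_license_jsonld(image_url: str, license_url: str, acquire_url: str) -> dict:
    """Build schema.org ImageObject JSON-LD with license metadata.

    `license` points to the licensing terms, and `acquireLicensePage` is
    the page where a user can buy or license the image -- the checkout-style
    page discussed above, which does not itself need to be indexable.
    """
    return {
        "@context": "https://schema.org/",
        "@type": "ImageObject",
        "contentUrl": image_url,
        "license": license_url,
        "acquireLicensePage": acquire_url,
    }

# Example with placeholder URLs (assumed site structure):
markup = image_license_jsonld(
    "https://example.com/photos/cat.jpg",
    "https://example.com/license-terms",
    "https://example.com/buy?photo=cat",
)
print(json.dumps(markup, indent=2))
```

Per the answer above, the `acquireLicensePage` URLs can be noindexed or disallowed in robots.txt; the metadata lives on the image's own page, so the checkout pages themselves don't need to be crawled for the badge.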
In Search Console, it indicates there are no security issues, so it's not possible to request a review. But if you search for our company name, the "this site may be hacked" message still comes up. How can it be removed if Google Search Console believes everything is OK? So I did a quick search for this company name. And the search result it shows for your home page is all about pharmaceuticals. So my guess is your website is still hacked. Clicking or looking at the website directly, I see it shows normal content. So this seems like a traditional kind of hack where, essentially, the hacker is cloaking to Google and probably sending users to a different page when they click on your search result. So that's something where you probably still need to resolve that issue to make sure that we can understand that your page is about, I don't know, I think it was like silverware or something like that, and not about pharmaceuticals. So the best way to do that is probably to double-check in our Help Center. We have a whole section on dealing with hacked websites. I'm not sure if it's in the Help Center or in the developer documentation, but one of those two places. And it lists essentially the same kind of hack that you're seeing there, where when you go to the page yourself, you see the normal content, and in the search results, you see the hacked content. And it has kind of a step-by-step guide on how to deal with that. Once you've resolved that issue, and if you still don't see any security issues being flagged in Search Console, you can just use the Inspect URL tool and resubmit the home page for crawling and re-indexing. And we'll pick that up usually within an hour or so. So that's kind of the direction I would head there.
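To illustrate the kind of cloaking described above: a hacked site often serves spam content (pharmaceutical keywords, for example) only to Googlebot while showing the normal page to regular browsers, so comparing the two responses can surface the problem. This is just an illustrative sketch with a made-up keyword list; in practice you would fetch the URL twice with `urllib.request`, once with a normal browser User-Agent and once with Googlebot's, and keep in mind that some hacks cloak by IP address, so a user-agent test alone may come back clean and Search Console's live URL inspection is more reliable.

```python
# Illustrative, non-exhaustive spam keyword list (assumption for this sketch).
SPAM_HINTS = ("viagra", "cialis", "pharmacy", "casino", "payday loan")

def cloaked_spam_terms(browser_html: str, googlebot_html: str) -> list[str]:
    """Return spam terms that appear only in the version served to Googlebot.

    An empty result does not prove the site is clean -- hackers may cloak
    by IP address rather than User-Agent -- but a non-empty result is a
    strong hint that the hack is still live.
    """
    browser = browser_html.lower()
    bot = googlebot_html.lower()
    return [term for term in SPAM_HINTS if term in bot and term not in browser]

# Example: the browser sees the normal silverware page, Googlebot sees spam.
flagged = cloaked_spam_terms(
    "<p>Fine silverware and cutlery</p>",
    "<p>Buy cheap viagra from our online pharmacy</p>",
)
print(flagged)
```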
The tricky part with this kind of hack is it essentially means that the hacker has access to your server, or had access to your server, which means that you also need to double-check all of the verification settings in Search Console. So on the one hand, your verification tokens. On the other hand, any other verification tokens that might be on that website. Because we've frequently seen that hackers will take over a website, they'll verify the site in Search Console, or kind of re-verify it with their own account, and then they'll use their own account and say, oh, my website got hacked, and, Google, can you get rid of this warning because I fixed the hack, and hope that someone from the web spam team removes the "this site is hacked" label, even though it's still the hacker who's asking for this to be done. So that's something where you really need to double-check all of the verifications on your website and make sure that only you are the one who's actually a verified owner. And then, of course, by following the guidelines or the guide in the Help Center, make sure that your server is locked down appropriately so that the existing hacker or any other hacker won't be able to get in again. So I think it's sometimes a really frustrating process, but it's really important to follow through with those steps. And if you have any difficulties along the way with that process, I would strongly recommend getting help from someone who has a little bit more experience with this, because they'll be able to see the places where hackers traditionally hide their code and help you to really clean things out completely and to get it resolved as quickly as possible. OK. And to add two-step authentication? Two-step authentication, yeah, I think that always makes sense for your accounts. But if someone is hacking your server, then the authentication on your own account is kind of, yeah, not that useful.
But it is something where, I don't know, the first time you get your site hacked like that, it's so frustrating. And it's really hard to understand what you should be doing. So it is definitely worthwhile getting help. Maybe go to the Webmaster Help Forum if you're really unsure about things. There are people that are active in the Webmaster Help Forum who have seen a ton of hacks and who can help you to find some things. But probably, if this is a company website, you probably want someone that you can interact with one-to-one and just get it resolved as quickly as possible and understand exactly what happened, so that you can make sure to avoid that in the future. All right, let's see if anything else popped in. Otherwise, yeah, I think. Oops, someone was waiting for the link. Let me just copy that. That was 30 minutes ago. Oops, sorry. Yeah. John, I have a quick question. Sure, go for it. Is the special announcement COVID schema live yet? I don't know if it's live yet. I think it's, as far as I know, still in the rollout phase. I don't think it's live in general yet. OK, thank you. John, I have two questions. I haven't been here in almost half a year. I don't know how long it's been, but somewhere around there. Quick question, the hashtag symbol, adding it before the actual title, like in the title tag, does that affect the home page? That means the actual title, like if adding the hashtag prior to the first word, does that affect the main keyword for the actual home page? So for instance, if I'm selling toys, and I have the hashtag and then a pipe and then the actual word, and so on and so on. Because I'm still having issues with that, because I'm having kind of an argument with someone on that. I believe. Honestly, I don't know. So my feeling is we would probably understand to ignore the hashtag symbol and just focus on the word, which is probably what you're trying to do.
But I don't know if we would use that combination of symbol plus word as something unique. So you ignore it? My guess is we would just ignore the hashtag symbol and focus on the word itself. OK. But that's something that you can probably test by including some random word with the hashtag and seeing if you can search for it with just the random word, without the hashtag symbol. It's hard when someone believes that the hashtag, you know how it is. Yeah. I mean, it's not going to make your site rank for that keyword automatically anyway. So just using, like, hashtag toys or, I don't know, hashtag cheap credit cards in a title, I don't see that making any difference. It just looks kind of weird in the search results. OK. And then also, you said something a while back on Twitter, and I know a lot of people quote that also here on the Hangout. But you said that guest posts, placing a link back to your site from a guest post, is unnatural. But what if a person does that on a massive basis? Like, do you just ignore that? Because I think Gary said in this other tweet. Usually, you would just ignore that. So that's the kind of thing where, if we can tell that this is clearly an unnatural link, we would essentially just try to ignore that. So what if the guy has, like, 40,000, 50,000, you know, you would just ignore that? Yeah. It sounds like they've been busy. But it's easier to ignore a lot of links than to ignore individual ones. Because our algorithms are kind of with that mindset of, like, we don't know if we can trust this link. And if we can see that there are lots of these links out there that are essentially following the same pattern, it's a lot easier for us to just say, oh, all of these are in the same bucket, we should just ignore those. OK. Thanks. I appreciate it. Sure. Can I ask a quick question about the international targeting in Search Console and the geolocation there for country? And the hreflang, where you can specify regions. Do they do basically the same job?
Are they related to each other? Or are they a little bit different in scope? Geotargeting and hreflang are essentially different things, but people tend to use them for related things. So hreflang essentially helps us to swap out the URL if we rank one of the URLs of that set. And geotargeting helps us to understand that this is probably a local page, so if someone is looking for something local, we would try to show them that. So the end result can look kind of similar, but they're technically different things. I think it's pretty confusing to have this slight overlap there. I'd love to have some easier setup for all of the international things, but all of the proposals I've put together ended up being much more complicated than what we have now. So, I don't know, if someone has a really good idea for how to let search engines know about international versions of a website, that would be fantastic. But it's been challenging. Can you imagine? Thank you. All right. More questions from any of you? Hi, Mr. Mueller. Hi. This is me, Wahan, from Search Engine Journal. I'm chasing you with the Discover question about large images. On the last call, you said you were looking into the issue of why non-AMP pages pull small images. I would like to kindly check if you did get the chance to look. We've been looking into that. And there's some work being done by the team to make, I think, the information a little bit clearer and to be a little bit more consistent internally with how we deal with those kinds of issues. So I don't have a complete update for you yet, but people are working on it. So hopefully, we'll have something soon. Thank you. Sure. Wow, we're running out of questions. I can't believe it. Let me see. There's one in the chat. We have separate pages for our shop, where there are only two things on that page: one is the shop info, and two, ratings and reviews. And our users usually come for the ratings and reviews.
So I want to display the ratings in the search results prominently. Yeah, so I think if on the product pages you have reviews and ratings, then that's a great use of the structured data for those individual reviews and ratings. I would just make sure that you're following all of the guidelines there, because lots of people like to use the review structured data. And that's something where our systems and our engineers are kind of more and more picky with regards to what we actually show in the search results. So make sure you're following the guidelines as completely as possible, so that it's clear that what you're providing on your page is actually what makes sense to show in the search results. Does the date in front of a meta description affect the ranking if the blog post is from previous years? No, not necessarily. So a lot of times, when we recognize a date on a page, we can show it in the search results directly. And that's not because it's in the meta description, but we show it next to the snippet, which is usually what we take from the meta description, and we display it there. But that's not necessarily a ranking factor, because it's not always clear what a user is looking for. Are they looking for the newest information on this topic? Or are they looking for research information on this topic, which might be a little bit older? So just because there's a date being shown there, that doesn't necessarily mean that that's a positive or negative ranking factor in any way. Are you seeing any potential impact of COVID-19 on the proportion of mobile, tablet, and desktop searches? I have not seen anything there. I don't know if we'd have any data on those changes at the moment. I have no idea. It seems like an interesting question. If you have a website that gets a lot of impressions, you can probably double-check that yourself in Search Console and see if there are any changes there.
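Going back to the ratings question a moment ago, the markup in play is the schema.org review snippet structured data. Here is a minimal sketch of building an aggregate rating payload in Python; the business name and numbers are placeholder values, and the choice of LocalBusiness as the item type is an assumption for this example:

```python
import json

def shop_rating_jsonld(name: str, rating_value: float, review_count: int) -> dict:
    """Build schema.org LocalBusiness JSON-LD with an AggregateRating.

    This is the kind of markup that can surface star ratings in search
    results, subject to Google's review snippet guidelines -- for
    example, the guidelines restrict self-serving reviews that a business
    hosts about itself, so markup alone does not guarantee the display.
    """
    return {
        "@context": "https://schema.org/",
        "@type": "LocalBusiness",
        "name": name,
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": str(rating_value),
            "reviewCount": str(review_count),
        },
    }

# Example with placeholder values:
markup = shop_rating_jsonld("Example Shop", 4.4, 312)
print(json.dumps(markup, indent=2))
```

As the answer notes, following the guidelines completely matters more than the markup itself, since the systems deciding whether to show the stars have become increasingly strict.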
It'd be interesting to look at across a number of different websites, but I don't know if that would be something that we would publish, or if it's easier for you to work together with a bunch of other publishers and combine that data and look at it from your side. All right. Any last questions before we head out? Martin, no? Oh, I just have to leave. Bye-bye. Just waving. OK. Just waving. All right. Fine. So the next hangout is lined up on Friday. There's a German one lined up on Thursday, if you want to practice your German. So if there's anything that we missed, feel free to jump into one of those hangouts. Thank you all for dropping by. And I hope you all stay safe and healthy and at home, I guess, as much as possible. And see you next time. Bye, everyone. Thank you, guys. Bye. Goodbye.