All right, welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a webmaster trends analyst here at Google in Switzerland. And part of what we do are these office hours hangouts with webmasters, publishers, SEOs, anyone who's making a website and trying to get it into Google Search. As always, if any of you want to get started with a question, feel free to jump on in now if there's anything burning on your mind. Otherwise, I'll just run through some of the submitted questions. And if you have any questions or comments in between, feel free to jump on in. And we should also have time towards the end for more questions from you all, if anything is left that we need to cover as well.

Hi, John. I'll ask a question, if I can go first. Is there a way, other than you personally looking, to tell whether a new domain, another domain we're thinking of starting on, has any inherent problems like the existing one?

Not really.

So, I mean, is it impossible to have that twice?

I mean, it kind of depends on what you're thinking about and what kind of domain it is. If it's a domain that was used in the past, you could look at things like archive.org to look at the data there. If it's a domain that you wonder about at the moment, then if you have access to Search Console, if the existing owner can give you read-only access to that, then you can double-check for manual actions there. But you can't really see if there's anything algorithmically associated with a domain. And as far as I know, you can't see what manual actions used to exist on a domain if you just have read-only access, because you only see the current status.

It's one of our domains. It's one of the ones we've had for a few years. It's just that, because of all of the previous issues with interlinking and whatever, I just wondered if there's a way to check, or if it's something I can send to you and you can say, no, you're fine, good to go.

I can double-check to see if there is anything obvious. I don't know if I would be able to see everything, especially if the site isn't being used.

It's not. No, it's just, I mean, it has been in the past. But you know, we're a little bit special. So. Yeah.

Sure. OK. I'll leave the more specific questions to other people.

All right. There was someone else who had a question as well.

Yeah. Hi, John. So we have a client who is in the "order cakes online" sort of niche. And they have this aggregator sort of thing where they list the bakery for XYZ location. And they list that bakery in three or four more locations in the city. So we are creating similar content for each page, and we are just trying to rank that thing in four different locations. So I just wanted to know, does that sort of create duplicate content for the user? Or would that affect the SEO ranking going forward, or something like that? And if it does, can it be managed in some other way?

Yeah. I mean, ideally, if these are separate businesses, I would really make sure that they have unique content on their web pages. So I think that's kind of the first thing that I would do there. If this is the same business in different locations, I would consider just making one page, listing all of the locations and opening hours. And that way you have one page that can be fairly strong, where you don't kind of dilute it by splitting it up into separate pages. So that's kind of the balance I would try to find.
On the one hand, really having something unique and compelling for each business, for example, if it's really something separate; or, on the other hand, grouping things together and making one stronger page rather than four or 10 or 100 individual smaller pages. I would prefer to have one strong page rather than all of these kind of individual versions. So that's kind of the split there. And with regards to unique content, it's not really a matter of, are these words exactly the same? It's really that it's unique and something special for each of those businesses that you have there. Not just that you change the order of the words, or you have a small typo in one of the words, so it's not 100% the same but it's very similar. It should really be something unique and compelling.

So you are saying we should have a separate page, but we should have sort of unique content for each bakery. And we should try to rank them in multiple locations, like one single page in multiple locations, right?

I think that depends on you. So I would not say "sort of unique." It should really be unique. If you really want to make separate pages, it should really be something that stands on its own. All of those.

100% unique.

Yeah.

OK. Sounds good. Thank you very much.

Sure. All right. Let me run through some of the submitted questions. And as always, if you have questions in between, feel free to jump on in.

We have a lot of URLs requested to be removed from Google because of DMCA. Around 80% of the requested pages were removed. Could this be a reason for an algorithmic penalty of the whole website? I don't know with regards to algorithms there in general, but if 80% of your content is removed for DMCA reasons, I could imagine that our quality algorithms might have trouble understanding the rest of the content, too. So this is not so much specific to DMCA, but if 80% of your content is known to be copied, then for the remaining 20%, I could imagine the algorithms are just like, well, I don't know if this is really that awesome either. So not specific to DMCA, but just in general, it sounds like something where you need to rethink what you want to do with that website.

What URL structure does Google prefer for AMP pages: a subdomain, a subdirectory, or something like a query parameter? Oh my gosh, always these subdomain-or-subdirectory questions. I always get in trouble for these. With AMP pages, you can use whatever method works best for you. Some people use a separate subdomain because they have the infrastructure separate. Some people use a separate subdirectory, because maybe a plugin or whatever they use for AMP pages works like that. Some use a query parameter. That's an option too. So all of these are essentially equivalent for us. The difference is mostly with regards to your side, your tracking, those kinds of things.

Will Google follow a canonical tag to page one? I guess the canonical tag goes from the different pages of a list to page one. Or do they all need to be the same thing? In general, for canonicalization, we need to understand that these pages are equivalent. One way you can signal that to us is with the rel canonical. But we also look at the content itself. So if we look at the content and we see something completely different on one page than on the other page, and you have a rel canonical between those two pages, then that's a bit of a tricky situation for us, because you're saying these pages are equivalent, but we look at them and we see they're clearly not equivalent. So what should it be? Should they be equivalent? Should we fold them together? Or should we say that these are separate pages that need to be indexed separately? And that's something that you can look at in different ways. So especially with paginated series, I'd prefer to do something like rel next, rel previous, or just to say, well, I'll allow the first couple of pages to be indexed, and all of the rest will have a noindex, nofollow. That's a valid option as well.
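To make those two options concrete, here is a minimal sketch of what the head of a deeper page in a paginated series might contain (the URLs are hypothetical):

```html
<!-- Option 1: connect the series with rel next / rel previous
     (here, on page 2 of a hypothetical list) -->
<link rel="prev" href="https://example.com/list?page=1">
<link rel="next" href="https://example.com/list?page=3">

<!-- Option 2: let the first couple of pages be indexed, and give the
     deeper pages a noindex, nofollow instead of pointing a
     rel canonical at page one -->
<meta name="robots" content="noindex, nofollow">
```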
We've seen a number of low-quality sites scrape our data and use it in combination with spun text to pass it off as their own. They spin up a number of different websites. Some of them outrank us with no authority. Is it possible that Google is being fooled? So I guess in general, it's always possible that Google is being fooled. I think there are always ways for some people to creatively get around our algorithms. We, of course, try to avoid those kinds of problems. But going back to this specific problem, I tend to see this a lot with sites that are overall fairly low quality as well. Really high-quality, good sites, even if there are scrapers out there that are copying their content or slightly spinning it, for the most part don't seem to have that many problems with that, because they have a really strong site themselves. Whereas if your website is also kind of on the lower part of the quality scale, then it's low-quality content here, low-quality content there, and it's really hard for algorithms sometimes to figure out which of these we should be showing in the search results. So that's something to think about as well. It might be the case that your website is fantastic, and we're just messing things up with our algorithms. If you feel that that's the case, I would strongly recommend going to the Webmaster Help Forums and posting there, ideally with some fairly generic queries that are leading to your pages, where it's really clear to see how essentially copied content is ranking instead of your content. So that's kind of what I would look at there.

Is an IP address important for local SEO? If yes, what's the role of the IP in SEO? So way in the beginning, when we didn't have a lot of information about geotargeting, the IP address of the server was a really useful signal for us, because we could see that if a server is located in a specific country, we could kind of assume that probably they're targeting users in that country. And that could be something that we could pick up. Nowadays, we have lots of different ways to get information about geotargeting for individual websites, such as a country code top-level domain, if you have one, or a generic top-level domain plus a Search Console setting. Those are the two main ways we get information about geotargeting. You can also use hreflang information to tell us about the language variants that you have on your pages. It's not quite the same as geotargeting, but it can do slightly similar things. And all of those methods tell us a lot more than where the server is currently located. And especially with globalization, with content delivery networks, the IP address of a website is not necessarily where the users are mostly located. So that's something we tend not to use when it comes to geotargeting at all.

With regards to local SEO: local SEO is often used as a way of describing things on a city level, where it's really hyper-local targeting that you're trying to do. And for that, it's a lot more important that we understand where your business is located. So that could be an address on your page. That could be a Google My Business listing. Both of those help us quite a bit there. The location of the server itself, again, doesn't play a role here. So you don't need to put your server in the city that you're trying to target users for.
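For reference, of the geotargeting and language signals mentioned above, hreflang is the one with on-page markup; a minimal sketch with hypothetical URLs (the ccTLD is a property of the domain itself, and gTLD geotargeting is a Search Console setting, so neither has markup):

```html
<!-- hreflang annotations, repeated on each variant, describing the
     language/country versions of a page -->
<link rel="alternate" hreflang="de" href="https://example.com/de/">
<link rel="alternate" hreflang="en-GB" href="https://example.com/uk/">
<link rel="alternate" hreflang="en" href="https://example.com/en/">
```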
What do you consider a link? Is an image from another website, using an IMG tag to link to a graphic, also considered a link? If so, does Google support nofollowing of images or iframes? So for us, a link is really an A element in HTML, so an anchor, I believe they're called, with an href attribute that has a URL attached to it. With that, it's a clear sign to us that this is a link: you can click on it, it goes to a specific location, we see the URL where it goes, and we can follow that link. Depending on the situation, we can pass signals through that link to say, well, this website is vouching for the other one or recommending the other one; therefore, we will use that to forward some information. Or, if it has a rel nofollow attached to the link, then we won't pass any information, any signals, through that link. An image itself is not a link. You can use an image as an anchor, for example. That's often done, where you click on an image and it goes somewhere else. That's done by also having a normal A element around the image tag. So if you have that set up, with a normal A element around an image tag, then, of course, that's a link again. But if it's just an image that's pulling an image from another website, then we would not consider that a link. So there's no way to nofollow these kinds of images, because we don't forward any information. It's not a link that we would use for normal web search.
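To make that distinction concrete, a minimal sketch with hypothetical URLs:

```html
<!-- A link: an A element with an href; Google can follow it and
     pass signals through it -->
<a href="https://example.com/page">anchor text</a>

<!-- The same link with rel nofollow: no signals are passed -->
<a href="https://example.com/page" rel="nofollow">anchor text</a>

<!-- An image wrapped in an A element is a link again -->
<a href="https://example.com/page"><img src="/images/button.png" alt="Example"></a>

<!-- A bare image pulled from another site is not a link, so there
     is nothing to nofollow -->
<img src="https://other-site.example/graphic.png" alt="Example">
```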
If you use a noindex tag on an article, and you have an equivalent AMP page for that article, do you also have to set the AMP page to noindex? Or can you keep it indexed? Do you still have to have the link rel amphtml on the desktop site? Or could you also delete that link? So with the two common setups that you have for AMP pages, it's slightly different. The one setup that's probably the most common is: you have a normal, traditional website, and then you have individual AMP pages, and you connect the two with the link rel amphtml and the link rel canonical back. In a case like that, if the traditional page is set to noindex, then we will drop that link to the AMP page, and we won't follow that; we won't index the AMP page either. Similarly, if the AMP page itself has a noindex, then we'll also drop that AMP page, and we'll just keep the traditional web page in our index. So in that setup, if we see the traditional web page has a noindex, and we look at the AMP page and it says the canonical is this traditional web page, then that essentially tells us this AMP page is not really a valid AMP page. And the other setup is where you have a canonical AMP, which is kind of a self-sustained AMP page, where it has a link rel amphtml to itself, as well as a link rel canonical to itself. In a case like that, we see the AMP page as something that stands on its own. So if individual pages within your website have a noindex and we find that AMP page, then that AMP page can be indexed individually, separately from the normal web page. So in that case, we would just index the AMP page as a normal page, as we would show it in Search normally.

Personally, what I would recommend doing here is: if you want to remove an individual article, I would try to remove it cleanly, so that you don't have this kind of broken link to an AMP page for a page that you don't actually want to have indexed. Mostly because of maintenance reasons, because someone else will be looking at this from your website, from your business, and kind of wondering: what did this webmaster mean when they set up this one page with noindex, but it actually has an AMP page that looks like it might want to be indexed? So that's the kind of conflicting and confusing part that I would try to clean up, mostly for maintenance reasons on your side, not necessarily for Google.
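To make the two setups concrete, a minimal sketch of the linking, with hypothetical URLs:

```html
<!-- Setup 1: paired AMP. On the traditional page: -->
<link rel="amphtml" href="https://example.com/article.amp.html">
<!-- ...and on the AMP page, a canonical pointing back: -->
<link rel="canonical" href="https://example.com/article.html">

<!-- Setup 2: standalone ("canonical") AMP, as described above;
     the AMP page points to itself: -->
<link rel="amphtml" href="https://example.com/article.amp.html">
<link rel="canonical" href="https://example.com/article.amp.html">
```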
When a site receives a manual action penalty and is subsequently removed, how long does it take before the impact of the penalty is gone? Additionally, do the recovery times differ based on the cause of the manual action? So I think there are two slightly different things that might be included here. On the one hand, we only remove pages completely from Search with a manual action if there's really something problematic on those pages. And in cases like that, it's really a strong sign across the board, where we look at this website and we think there's really very little value in us investing any resources in indexing content from this website, because it looks like it's a complete copy from somewhere else, or it's just scraped or spun content. Then we'll essentially remove it completely from Search. And when that happens, on the one hand, we stop showing it in the search results, which is the visible part. And on the other hand, we stop indexing it, so essentially, over time, we don't have this website in our index at all anymore. So if that manual action is removed, for instance, if you take over a domain that has a manual action like this and you clean it up, or you start fresh with new content, and you submit a reconsideration request, then what will happen is, first, we'll have to start indexing this website again. And that can take a bit of time. That can easily take a couple of weeks, for us to start picking up content from this website and saying, oh, this is actually good content, we'll start indexing this again, we'll start showing it in the search results. That's something that can take quite a bit of time. On the other hand, if this is a manual action that doesn't result in the page or the site being removed completely from Search, then usually it's just a matter of us recrawling and reprocessing those pages, so that we understand that things are OK now, and we can rank it normally after the reconsideration request has been processed properly. So those are the types of changes that are a little bit faster to be seen than the changes where we remove a website completely from Search. But for both of these situations, you can clean that up. You can essentially recover completely. It's not the case that our algorithms would hold a grudge, or that they would say, well, this website had a manual action a year ago, therefore I'm never going to trust it again. When these manual actions are cleaned up, we will treat the website as it is.

I have code that is blocking my home page from rendering, and I'm at a loss on how to remove or defer it. So this sounds like something that's pretty technical, if there is specific HTML or JavaScript on your pages that's blocking Google from rendering the page. What I would do there is go to the Webmaster Help Forum and get some tips from other people who could take a look at this and say, well, this is really a problem, or, this just looks like a problem in that error report that you're looking at, but actually everything is fine. And more often than not, there will also be some tips with regards to what you could do to improve the situation. This is something that could have a variety of different causes resulting in a page not rendering properly, so I'd really try to get some help from other experts who've run into similar issues in the past.

A question about keyword-based domains, for example, bychairsonline.com. In this case, is it a positive signal for ranking for these keywords, or will it take time like any other new website? Yes, it takes time like any other new website. Just because keywords are in a domain name doesn't mean that it will automatically rank for those keywords. That's something that's been the case, I think, for a really, really long time. Obviously, there are lots of websites out there that do rank for the keywords in their domain, but they've worked on this, maybe for years and years now. And maybe they've had that website for a really long time. So it's kind of normal that they would rank for those keywords, and the fact that they happen to have them in their domain name is unrelated to their current ranking.

Does robots.txt need to be indexed in Google? No. Robots.txt is a file that controls how search engines can crawl content from a website. And this is something that needs to be machine-readable. So it needs to be read by a machine, but it doesn't need to be indexed in Google search results. You don't need to see it in the search results for your site. Sometimes we do index these files, because we see links to them from random sites, but it's not something that needs to happen. If it does happen to get indexed, for the most part it generally shouldn't rank for any of the normal words on your website. So probably you can only pull it up if you're explicitly looking for that robots.txt file. If you want to take control of that, even in that situation, you can use an X-Robots-Tag HTTP header to tell us not to index the robots.txt file. But for the most part, you don't need to do anything special there.
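For reference, that X-Robots-Tag header would be sent on the HTTP response for the robots.txt file itself; how you configure it depends on your server. A sketch of what the response might look like:

```http
HTTP/1.1 200 OK
Content-Type: text/plain
X-Robots-Tag: noindex
```

The file can still be fetched and obeyed for crawling; the header only tells Google not to index the file's contents as a page.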
Our site looks partly incomplete and broken in the Google cache because of missing access control headers on our CSS and JavaScript requests. Does this have any negative impact? No, not really. If you're sure that your pages render properly for Googlebot, using the mobile-friendly test or the fetch-and-render tool in Search Console, then that's what's important. The Google cache view is something that can be broken in weird ways like this, especially if you use a lot of JavaScript on the pages, depending on how you set up your CSS and how you set up other embedded content. That's completely normal. We show the static HTML version in the Google cache view. So if, from that static HTML version, the embedded content isn't working, then it can happen that the cached page doesn't look that perfect in the search results. But if we can render the content, we can index the content, and that's what counts.

We found several domains copying our content one-to-one, changing our internal links to porn keywords. Would you recommend putting those links into the disavow file? Any other recommendations on what to do? So if they're copying your content one-to-one, you might want to look into the DMCA process. That's something that could help you in a situation like this. It's a legal process, and I can't give you legal advice, so you'll need to check with whoever you get legal advice from to see if that would work for your situation. You can also submit a spam report to us if this is something that you think the web spam team might want to look into. In general, I don't see these kinds of links causing any problems for any normal website, so I wouldn't lose any sleep over it. If you feel that maybe Google is not ignoring this properly, then you can put them in the disavow file. That definitely doesn't hurt. But for the most part, I don't think you need to do anything special.

Why doesn't Google recognize my site as a mobile version? I have another site with the same structure, and it is recognized as mobile by Google. I assume this is with regards to the mobile-friendly label that we show in the search results. Well, I guess we don't show it in the search results anymore, except if you're the owner and you search for your website; then we will show that your site currently isn't seen as being mobile-friendly. And there are a variety of things that might be happening there. So what I would do, first of all, is take the mobile-friendly test. That's a testing tool that you can use to plug your site in and see how it shows up there. Then what I would also do is use fetch and render in Search Console to look at the mobile version there. What I've sometimes seen is that embedded scripts, JavaScript, or CSS files sometimes get blocked by robots.txt, or that we, for whatever reason, can't crawl them properly at one point or another. And if we can't crawl the embedded scripts, then we can't be sure that the page is mobile-friendly, because we don't know what the script or the CSS file might be doing. So if those files are blocked by robots.txt, for example, then it would be kind of normal for us to say, well, we're not sure if this is actually mobile-friendly or not. That's something that's probably pretty easy for you to resolve. Again, I would take a look at those testing tools and see if you can find anything conclusive there. If not, it might also be worth posting in the Webmaster Help Forum with some more details of what exactly you're seeing, so that someone can take a look and see: is this really a problem? Are you perhaps interpreting a report incorrectly? Or is there maybe some data on Google's side that's wrong and needs to be reported to us? All of these might be options.

How do I improve my crawl rate? So that's a fairly broad topic. There are lots of things that could be done to improve the crawl rate of a website. In general, there are two, or maybe three, aspects that play the largest role here. On the one hand, we limit our crawl rate based on what we think your server and your network infrastructure can take. So that's kind of our upper limit, where we see, well, if we crawl a little bit more than this, suddenly your server gets slower, or suddenly your server starts serving errors. It makes sense for us to stay below that limit, because we definitely don't want to cause any problems for the website through our crawling. So that's kind of an upper limit. The other thing is, in Search Console you can give us more information about a limit that you want to set as well.
So that's another kind of upper limit for the crawl rate that we have. And on the other hand, there is essentially our crawl demand: what we think we need to crawl from your website in order to keep up with the changes that you have on your website. And that's something that's more on our side, where we look at your website, at your content, and think, oh, we need to crawl more. We need to make sure that we pick up more content from this website, that we can crawl and get all of the new changes for this website, because it's really a fantastic website, and we really, really want to make sure that everything new that happens here is in front of our users as quickly as possible, so that they can go to your website when they're looking for something new. So that has different aspects. On the one hand, having a really good website is important. If there's content that changes frequently, that helps us to pick that up. The other side of that, though, to also keep in mind, is that crawl rate alone is not a sign that we will rank your website higher. Just because we're crawling a lot of content from your website doesn't necessarily mean that we would rank your website higher. Crawling is not the same as ranking. So unless you're actually missing new and important content in the index from your website, I would not really worry about the crawl rate. That's something that we handle on our own automatically; it's not something that you really need to worry about. But if you are seeing that new and really important content from your website is not being picked up, then I'd double-check those different factors: from a technical point of view, can we actually crawl that much, or do you maybe have a limit set in Search Console?

What about social with regards to that crawl budget?

Social... We essentially don't use any kind of social signals for Search. So just because a URL is shared on Twitter or somewhere else... for the most part, we can't always crawl all of those shares. And for the most part, I believe all of them are with nofollow now anyway. So we don't pick up any information there.

Even if a lot of your current, up-to-date, best content is being shown on social media, and you're driving traffic to your site from social, you would think that that would kind of...

I don't think we would pick that up at all. I think in a case like that, we would try to pick up the indirect signals more, where we actually see people recommending that content in a way that we can follow those links and pass signals through those links.

OK, thank you.

Also, a couple of questions in the chat, too. Oh my gosh, I just noticed. Yeah, OK. A question. Let's see. If I create the same brand on many social networks, do I get a better ranking? No. So you can create social profiles in different locations and publish content there. We can crawl and index that content, for a large part, if it's public, and we can show it in the search results. But just because you have a lot of social profiles doesn't mean that your website is in any way more relevant than otherwise. So the number of social profiles doesn't really matter.

In the legal profession, there are a lot of directories to submit to. Some are free, some are paid. Are these frowned upon as a way of getting backlinks? For the most part, when it comes to directories where you submit the content yourself, we ignore those links.
We want links to be natural, in the sense that we can tell that someone is recommending a website naturally. We don't want links to be such that we can look at them and say, oh, well, the webmaster put this link here, because that's not really a recommendation. That's just the webmaster saying, yeah, my website is great. So, for the most part, we skip those.

If the Google cache view of a page shown in the search results shows the content and URL of a completely different URL, does that mean that the first URL issues a 302 redirect to the second? Or could there be something else that causes this? So in many cases, when you look up the cache view of a page and you see a different page or a different URL, that means we chose a different URL as the canonical for that page. And if this is your website, you can double-check this with the new Inspect URL tool. You can plug your URL in there, and it'll tell you: this page is canonical, or this page is not canonical, and here's the canonical that we chose for this page. And more often than not, the canonical that we chose matches the URL that we show in the cache view. So in many cases, the URL shown in the cache matches the canonical of the page. And picking a canonical is something that we use multiple factors for. We use things like redirects, internal links, sitemaps, hreflang, external links, and rel canonical, of course. All of these things help us to get a sense of which URL should be the canonical for this page. And they all kind of add up. If everything points in the same direction, we'll pretty much trust that. Whereas if things point in different directions, then we have to figure that out on our own. And that might result in a situation like this, where we see, well, there's kind of similar content here, all of the links are going to this page here, and the rel canonical is kind of up in the air; then maybe we'll pick this one as the canonical. It might also be that the next time we reprocess it, we'll pick a different URL as the canonical. So I'd take the cached page's URL as an estimate of the canonical that we chose, and use the Inspect URL tool as a way of determining what the actual canonical is. And then take that and try to work your way back to think: why did Google pick this particular URL as the canonical? And if I feel that's a bad choice, what can I do to tell Google clearly that I want a different URL as the canonical?

If a site is changing domain names and the current site was not using HSTS, is it OK if the site owners use HSTS with the new domain? Is there any downside to making the move during the domain name change? Would you recommend holding off on HSTS with the new domain until things settle down? So, for the most part, we don't use HSTS as a ranking signal. Actually, not just for the most part: we don't use it as a ranking signal in Search at all. HSTS is a way of telling browsers that they can go to the HTTPS version directly, without having to go through the redirect. So you still need to have that redirect in place, but essentially it says that the browser can go to HTTPS directly, without even checking the HTTP version, if someone clicks on a link that goes to the HTTP version or enters that URL. So I would personally see HSTS as something that you add on top, at the end, when you're really sure that your whole setup is working as it should.
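For reference, HSTS itself is just a response header sent on the HTTPS version of the site. A typical example (the max-age value and flags are choices you make, not fixed values):

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains
```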
In the case of a site move, when you're sure that everything has moved properly, that you don't have any issues with non-HTTPS content, that your redirects are all working properly, that your certificates are all aligned, and that all of the setup with your content delivery network, if you have one, is done properly: at that point, when you're really sure that everything is working well with HTTPS, then set up HSTS to kind of cement that configuration. Whereas if you set up HSTS way in the beginning and you make some kind of mistake along the way, then suddenly that could cause problems, and users would go to the HTTPS version of the site when maybe that version isn't completely ready yet.

OK, there is a question, a long question, about the site hopdrifts.com. Let me just double-check. So I think what happened here is: you have your website, which you're hosting under your domain name, and then there are the IP addresses of your servers, where you also have the same content, and you serve that in a way that responds to any request for that IP address. And it sounds like what happened is that some of the content got indexed under the IP addresses. So in general, the setup that we recommend is to make sure that you check the host name that's requested, and not serve that content when it's being requested through the IP address. That would essentially prevent this right at the root. Using a rel canonical helps us as well. So if we accidentally stumble across an IP address that serves your content, the rel canonical tells us: this is the URL that you really want to have indexed. A redirect is also something that could be set up. So all of these things could be done when you're hosting your website, to make sure that we don't accidentally index the IP address version.
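A sketch of the host name check, using a documentation IP and a hypothetical domain (the exact configuration depends on your server): when the request arrives for the bare IP address instead of your host name, answer with a redirect to the canonical host rather than serving the content.

```http
GET / HTTP/1.1
Host: 203.0.113.7

HTTP/1.1 301 Moved Permanently
Location: https://example.com/
```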
Now, I think the question goes on: from the hoster, they suddenly got different IP addresses, and they can't control the old IP addresses anymore. So they can't make any changes or set up any redirects there. And in cases like that, there's really not much that you can do, because if you don't control those IP addresses, you can't change what is served there. So obviously, adding a rel canonical or adding a redirect is not possible anymore. For the most part, we will focus on your website and try to crawl and index it like that. So that's not something where I have really seen any sites have significant problems with indexing, just because they also happen to have some content indexed under the IP address. I know that's something that people tend to test for, because it's obviously something that can be cleaned up fairly easily. So if you have control over your IP addresses, then set them up properly. But that wouldn't negatively affect the rest of your website.

I think the other thing that's probably worth saying here is, looking at your website, I'm worried that from a content point of view, it's not really as good as it could be. So instead of worrying too much about these different IP addresses that still have traces of your content in the index somewhere, I would really focus on your website itself and make sure that, from a content point of view, you're providing something that's unique and compelling and really of high quality. So in particular, if you're writing about different mobile apps, then make sure that you're writing about the real app and you point to the actual download link, rather than copying the APKs and hosting them on your site, because that's essentially just lining your site up for trouble down the road with regards to copyright and all of the other hassles that you have there. So if you're spending time on your website, investing in making something really good, cleaning up issues like the IP addresses, then maybe it's also worth taking a step back and thinking about what you can do to make sure that your website remains something that you can build on for years and years, rather than something that you have to burn down to the ground because it suddenly got removed completely from Search for essentially just copying content from other sites. So that would kind of be my recommendation there. And I know that's kind of hard to hear, but I think it's worth looking at the bigger picture, rather than getting stuck on small things like IP addresses happening to have content indexed.

I was expecting to see the Chrome browser highlight insecure sites from today. I don't know when the cutoff time will be. And I already kind of feel sorry for those insecure sites. Like, if they're already insecure and we flag them as not secure, they will feel bad. But if these are your sites, you can fix that.

How long does it take for Google to display review stars in organic search results after applying the markup? So structured data, in general, is something where we have different thresholds that need to be fulfilled before we start showing it in Search. On the one hand, it needs to be technically correct. So testing with the testing tool is kind of the first thing. On the other hand, it needs to be logically correct, in that you're using the right type of structured data for the content that you have. For instance, if you have a blog post about bicycles and you use recipe markup on those pages, then that's probably the wrong thing to set up. Even if you technically implement it properly, it doesn't really make sense. And the third thing, which is always tricky, is that we need to be sure, from a quality point of view, that this website is something that we can trust and that we want to show in the search results with structured data. So that's kind of the trickier aspect there. The first two you can work out from a technical point of view and from reading the documentation. The quality side is something that you really need to work on on your own. And there's no kind of metric that you can just look at and say, oh, I'm at 0.5, I need to go to 0.6. It doesn't work that way.
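As an illustration of the technically-correct and logically-correct parts mentioned above, review markup might look something like this (a hedged sketch with made-up values; the type has to match what's actually on the page):

```html
<!-- JSON-LD review markup for a product page -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example product",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.4",
    "reviewCount": "89"
  }
}
</script>
```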
With all the changes happening for mobile-first, my clients who have been on page one organically for several years now are feeling cheated because of the paid ads being displayed before the map and the non-paid search results. So I think there are a few things being mixed up here. On the one hand, mobile-first indexing is just the technical way that we index content. It doesn't affect how we show things in the search results. So it doesn't affect ads on a page. It doesn't affect the composition of the search results with regards to images or different kinds of news elements on a page. None of that is affected by mobile-first indexing. That's something that's completely separate. The changes that we're doing with regards to mobile-first indexing are really more of a technical nature, in that we've seen that most users use smartphones to access the web, and we want to make sure that our search results reflect that; therefore, we're going to start using the smartphone Googlebot to crawl and index pages. That's essentially the change that's happening with mobile-first indexing. It doesn't affect the composition of the rest of the search results at all.

And if you're seeing things in the general search results that you don't like, that you think are bad for the user, that are confusing, that you feel are not relevant for the individual queries, I'd strongly recommend that you give us feedback on that. That's something that we take very seriously. There's a feedback link at the bottom of all of the search results. You can give us feedback there. We have a Web Search Help Forum where you can give us more verbose feedback, with photos or screenshots of things that you're seeing. All of that is really useful for us. We pass that on to the team when we see it; the team picks up the feedback directly from the search results too. And that's really important for us. So if you're seeing things where you're saying, oh, this doesn't make any sense, Google, how dare you show this element here, this is terrible, and I can't find what I'm looking for: then tell us about it. Don't just keep it to yourself, or only post it in these office hours hangouts. Really be explicit about those search results and let us know.

John, I've got a quick question regarding hreflang, if that's OK?

Sure.

We've translated only 30 pages of our site, like our most important content. What's the best way to handle the rest of the pages on the site? The plugin that we used changes every URL, even if the pages aren't translated; it's about the country code within the URL. What's the best way to handle them? At the moment, we're noindexing those pages, so they're not included. Should we stop them being crawled, or...?

I think that's fine. So if you have the hreflang set up pointing to a page that has a noindex, then we just drop that page. It doesn't affect the rest of the hreflang. It's perfectly fine to have a small set of pages that are connected with hreflang and the rest just kind of standing on their own.

Yeah. I mean, one of the issues is we've got the pages with a different URL because of the country codes in the URL, but the content is still English. The rest of the navigation is all translated, which is cool. So we've noindexed those pages. Should we stop Google crawling those pages, because it's...

No, I think that's perfectly fine to keep like that.

OK, thank you.

Cool. All right, let me see. We have like 10 minutes and a bunch of questions. Let me see if I can run through some of these very quickly, and hopefully we'll have a bit more time for you as well.

We have a multinational company, and we bought a lot of local domains, but we keep everything under our gTLD. What should we do with the local versions? So in general, what I'd recommend in a case like this is just redirecting them to your local version on your gTLD, if you want to keep it like that: just having your content on one URL rather than on two URLs. That's something that's often easily set up. And if at some point later you decide to move to the local domain, then that's something you can still do; just redirect in the other direction. So that's kind of what I would recommend doing there.
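A sketch of that kind of redirect, with hypothetical domains: a request for a page on the local domain gets a 301 to the equivalent local section of the gTLD site.

```http
GET /some-page HTTP/1.1
Host: example.de

HTTP/1.1 301 Moved Permanently
Location: https://example.com/de/some-page
```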
The option that you mentioned, using a meta refresh or adding a robots meta tag to these pages, I would shy away from, because you're just adding unnecessary complexity. I would just redirect them. OK.

We have a bunch of international pages. Our primary market is the US. So we have essentially en-US, en, and x-default all pointing to the US version. Would the x-default be enough? From our point of view, an x-default would be enough, if this is really the version that covers everything else. That's perfectly fine. Personally, I'd recommend still having the other versions on there, especially if you have different country versions otherwise, just so that it's easier for you to manage the maintenance of those settings, so that you can clearly see: oh, the German version is here, the French version is here, and the English version is here. Instead of seeing German, French, and "everything else is here," where you might be like, what does this "everything else" actually mean? So just make it easier for yourself. From our point of view, both of those options would work.
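A sketch of the fuller variant, with hypothetical URLs; en-US, en, and x-default can all point at the same US version:

```html
<link rel="alternate" hreflang="en-US" href="https://example.com/us/">
<link rel="alternate" hreflang="en" href="https://example.com/us/">
<link rel="alternate" hreflang="x-default" href="https://example.com/us/">
<link rel="alternate" hreflang="de" href="https://example.com/de/">
<link rel="alternate" hreflang="fr" href="https://example.com/fr/">
```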
I can't verify my website. You type in the URL, and it goes to a different one instead. What I would recommend doing here is posting in the Webmaster Help Forum and really being explicit about what you're typing in and where you're typing it, and which URL is actually your URL and which one might be something else, so that someone there can take a look and see what is actually happening. Is this a matter of Search? Is this a technical issue on the hosting side? Where can you get help to clean that up? In the Help Forum, there are lots of people who have seen these types of issues over and over again, so they're pretty good at that.

Can we implement hreflang where the content in the targeted countries is different but the language is the same? So, for example, India and Singapore. Yes, you can. That's a really common use case for hreflang. We see that a lot in Europe, and also between Latin America and Europe; that's really common. So, perfectly doable.

Is there any way to tell Google that a website has a new domain if a redirect is not possible? For example, if someone else took the domain because you forgot to renew it. If you don't have control over the domain and we don't see a redirect there, then we will not forward signals to your new domain. We will essentially look at that situation and see, well, there's the old domain and there's the new domain here. This new domain has some signals; the old domain has some signals. We'll treat those as separate URLs. So it's not possible to retroactively have Google transfer all of the signals from an old domain to a new one if you don't control the old one anymore. That's always a bit of a hassle if you have to move domains because your domain expired and someone else picked it up. It's happened to a lot of people; I include myself there. I've had that happen to sites that I've worked on in the past as well. You kind of have to bite the bullet and start over with your new domain, or weigh how much it's worth to get the old domain back from whoever managed to pick it up. So it kind of sucks.

Is there a correlation between stock levels on an e-commerce website and rankings? No. So we don't look to see how many of the items you have available and try to rank pages, or shops, that have more of this content available. That's not something that we would do. What we might do is pick up a clear signal on a page that says this content is no longer available, and treat that as a soft 404.

A question about subdomains: what is better, like a CDN on a different domain, or a subdomain on your main domain? I think, specifically with regards to images, both of these are options. What I would do, if this is content that you want to provide, is make sure that it's on a domain that you control. So if you're using some specific CDN and they offer hosting on their domain name, I would still try to host that content under your own domain name, so that if at some point later on you decide, I actually want to try a different CDN provider, then you can still move that content. Whereas if you're hosting everything on their domain and you decide to try a different CDN provider, well, it's hosted on their domain. You can't set up redirects. It's kind of stuck there.

All right. Wow, I think we kind of made it through. What else do we have here in the chat? Stuff is happening.

Hi. I have a brother. He builds 500 websites with a new domain name, and he allows people to create guest postings with a backlink to their site. Does this violate the policy? And do you think this helps people to increase rankings? Yes, if you're guest posting with the intent of getting links from a website, that's essentially the same as you dropping a link on someone else's site. So we would try to ignore those. That would be against the Webmaster Guidelines.

John?

Yes.

I have a question from earlier in the Hangout. We were talking about people scraping content. And as a matter of fact, yes, there's a perfect example: a highly ranked site, also a search engine, not you guys, of course. But they ripped us off. I mean, completely, quote by quote, using the same ellipses that we did. It was very obvious; readers noticed it as well. What do you do then, when it's such a big site?

I don't know. I mean, there might be some legal things that you could do there, but I really can't give you legal advice.

Yeah. And we also see that with a bunch of user-generated sites. For some reason, it's the user-generated sites more than others. They scrape a lot. They change the word "insider" to "source," "allegedly" to "supposedly." But basically, if you look at it, it's the same structure. And for some reason, we've noticed that these user-generated sites, which I can't imagine have better PageRank, I don't see them being linked elsewhere, seem to pop up a lot. It's kind of shocking to us. And again, there's sort of no recourse. Really, we send them notes. We say, stop doing this. They say, hey, we're user-generated, which, by the way, actually does protect them legally in many ways. They can even take photos and say, it's user-generated, it's not copyright infringement. It's a very gray area. And they seem to get away with it.

So in cases like that, I think what would be useful is to have examples where you're clearly seeing that, for fairly generic queries, we're showing these higher than the content that should be ranking there, where it's originally from, for example. I think that would help us to improve our algorithms in that regard. But for the most part, I would still try to get rid of it at the source. So if there's anything that you can do, kind of like saying, well, hey, this is my content, I worked on this, you can't just take it and twist one word around and say this is something new.
I would try to go that route.

I mean, we've tried to publicly shame them by doing screen grabs. That helps a little bit. Lastly, one thing about the DMCAs on YouTube, just a suggestion. What we've been noticing also is, I think it's like a new cottage industry: they take stories from all verticals (entertainment, science, whatever), and they're putting them through these, I guess, programs, and they're spitting out... basically, an automated voice is reading the articles themselves. I mean, it's the same thing. But when you do a DMCA, there's sort of no way to do a screenshot to show YouTube, for instance: look, they're even mentioning this site's name, or they even mention it quote by quote. And then what happens is, sometimes the people say, I'm sorry, I won't steal. And other times they file a counterclaim. Who wants to litigate hundreds of these? If I can make a suggestion, it would be nice to have a thing where you can upload where they're taking the exact verbiage and putting it through these audio programs.

I've never seen that. But if you can send me some examples, I am happy to pass that on to the YouTube team as well.

Sure.

I've seen some really creative people doing weird stuff on YouTube, but I've never seen them, like, speak a website. That seems...

We'll send you. We'll send you. I won't bombard you with a lot, just maybe two or three.

Okay, that sounds great.

Okay, thank you. Cool.

Just on that, what's the violation there? If someone takes what you said and transcribes it, is that... what's the problem? I understand it's a problem. I imagine if you're providing a service there, or they're piggybacking, is that wrong? If they're doing something that you should be doing, what's the actual violation?

It's copyright infringement, because they're using your proprietary information, and worse, word for word on YouTube. They're transcribing articles.

All right. So it would be a violation of what, YouTube's terms, or yours?

The originator's, yes.

So whether, if it's your written content and someone reads that out...

That's complicated, yeah.

That's what I'm asking. I'm not questioning one way or the other whether it is. I'm just trying to question where it is, if anywhere. Because you're not allowed to take other people's... I'm not allowed to record a football match and put that on YouTube and then claim that's my content, because it's not; it's Sky or Fox or someone else's. But when it's read out... I mean, God knows who's watching someone read a transcript anyway. Can't imagine. I'd love views that massive, but...

I need to go. Sorry to cut you off. This sounds like a fun discussion for the YouTube forum, but I need to head out. Thank you all for coming. Thanks for all of the questions that were submitted. I hope you found this useful. And I'd love to get the additional feedback from the individual people as well, if there's something more that I can kind of forward on to the team. All right. Thanks a lot, everyone. Hope you have a great day.

Thank you, John.

Thank you so much. And Rob. Thank you.