All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do are these Webmaster Office Hours, where webmasters, publishers, SEOs, even web developers can join in and ask their questions or discuss topics all around web search. Looks like a bunch of questions were submitted already. But if any of you want to get started with the first question, feel free to jump on in. Lyle, I think you're talking, but I can't hear anything. Oh, man.

Hello. Hi. Hey, John. Hey, everyone. This is Harinder. I'm with Adobe. I'm a technical SEO, and this is the first time I'm joining the Webmaster Hangout, so looking forward to it. John, I had a question. So I have a website that I manage. Last year in June, it changed from a regular web listing to a video carousel for some of the queries, like stock footage or stock videos. That caused a huge drop in clicks, as the result was kind of misleading with the video thumbnail. That result doesn't really match the intent of the user, as users were mistaking the video carousel result for a YouTube video. So in order to block it, we took some actions. We blocked the video using the robots.txt, and we also removed the videos completely from the page. That didn't help much, actually. Things actually got worse. Instead of picking up the video, Google started showing the image thumbnail in the video carousel with a timestamp of 15 seconds. I have no idea where that was getting pulled from. The page didn't even have a video on it anymore. So that ultimately led to a very poor user experience, as the video was nowhere to be found. The user saw a video on the search page, and once they landed on the page, there was no video. And that ultimately led to a slight drop in rankings as well. So any suggestions? I know you tweeted last year that we should try adding a sitemap for the videos and setting an expiration date in the past. That was one of the suggestions. And the other one was blocking that part of the page completely. So that's the one we tried. The problem with the thumbnail URLs is that they are delivered via a CDN, so we don't really control the URLs, so we can't really block them. So any suggestions there would be very helpful.

Yeah, I think that's kind of tricky, because there is no explicit way to say, I don't want my page to be shown like this in the search results. So what I kind of tend towards is trying the expiration date approach, which I believe you can also specify in structured data on the page. I think so, yeah. With JSON-LD structured data, there's a way to specify the expiration date as well for a video that we picked up on a page. So that might be something I'd try. What you can also do is, if you want to send me that query and the URL involved, then I can take a look at that with the video search team here to see what we could be doing differently in the long run to avoid this kind of thing from happening. Is it just one page within your website, or is it something that happens across a lot of pages? It's just the one page, John. OK. Yeah, I can definitely send that to you. And what's the best way to reach you, via email? You can just drop it here in the chat as well. I can pick it up from there. Awesome. Thank you so much for answering that question. Sure. All right. Any other questions before we get started?
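As a rough sketch of that expiration-date approach (the names, URLs, and dates here are made up for illustration), video structured data supports an expires property, and setting it to a date in the past signals that the video is no longer available:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "VideoObject",
      "name": "Example stock footage clip",
      "description": "Clip that has since been removed from this page.",
      "thumbnailUrl": "https://www.example.com/thumbnails/clip.jpg",
      "uploadDate": "2018-06-01",
      "expires": "2019-01-01"
    }
    </script>

The name, description, thumbnailUrl, and uploadDate fields are the ones Google's video documentation treats as required; expires is the property doing the work in this case.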
Oh, man, Lyle, I can see you talking, but I can't read your lips. Oh, man. You're not muted, so it's something with the microphone on your side, maybe. But we can come back to you. So keep trying, mess with the settings, maybe it'll work out.

All right, let's see. What do we have submitted here? Nearly all of the articles in our blog section have been de-indexed. Looking at Search Console, it says crawled, currently not indexed, and it's showing no specific reason why. What could be happening there? So it's really hard to say without looking at specific examples from your website. What I would recommend doing here is posting in the Webmaster Help Forum with the exact URLs that you're looking at, and the queries that you're looking at, so that folks there can take a look. Sometimes it's something simple, something technical, such as maybe your blog is set up in a way that is noindexing these pages by default. But usually the folks in the Webmaster Forum are very quick at recognizing these kinds of common issues and can help you to narrow things down, or to escalate them if there's really something completely weird happening here. Another thing to keep in mind is that we don't index all pages that we've seen. It's really common for websites to have lots of pages on them that we know about but don't necessarily crawl and index. And that's not necessarily a problem. That's just our algorithms trying to figure out where it makes sense to focus our energy and where it makes sense to focus a little bit less energy. And usually that also means that those pages are unlikely to be that visible in search anyway, so it wouldn't change that much for your site overall if they were indexed. But again, I would definitely check in with the Webmaster Help Forum to see what some of the possible causes could be there.

We added a bunch of pages to our site, but forgot to update the sitemap file. Google found them and indexed them anyway, but do we need to do anything special? No, you don't need to do anything special. A sitemap file helps us by adding more information with regards to crawling and indexing. It doesn't replace the information that we have for crawling and indexing. So if it's just a matter of finding these URLs, and we've found them through normal crawling, that's perfectly fine. A sitemap file shouldn't replace normal crawling anyway. If none of these pages were linked internally, we might be able to find them through a sitemap file, but it would be really hard for us to understand the context of those pages. So it's actually a good sign, in that regard, that we're able to find those pages and index them anyway. A sitemap file would help us here to recognize when you make changes on these pages, so we can pick them up a little bit faster. But if we're already indexing them, then at least that first step isn't something that you're critically missing.
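To illustrate that last point, a minimal sketch with a hypothetical URL: the lastmod date in a sitemap entry is the part that helps Google notice changed pages faster.

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/blog/new-article</loc>
        <lastmod>2019-02-15</lastmod>
      </url>
    </urlset>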
I checked our inbound links and found there to be lots of spammy links, which I added to our disavow file. I wonder how important it is for ranking to keep on top of this, as I heard Google ignores them anyway. So in general, you don't need to keep on top of this. Pretty much any website, if you look at the inbound links, will have a bunch of links that are just spammy or irrelevant, and that's perfectly fine. We're pretty good at ignoring a lot of the kind of cruft that websites collect over the years. So I wouldn't focus on this. If it's something where you know that previously you went out and bought links, or you had an SEO do something really weird with regards to links, then it would make sense to clean that up. But if you're not aware of anything, and things are otherwise kind of OK, then I wouldn't worry about this.

Does Google pick up on whether we use a location finder on a mobile version to give a better UX? Can this help with rankings? So I assume with location finder, you mean something where your website would recognize where the user is located and maybe show the local phone number or local address based on the user's location. That's not something that we would do with regards to search, so that's generally not something we'd even pick up on. In general, when Google renders pages, it also denies any of these kinds of additional information requests. So that would be something where we probably wouldn't even notice.

Does link equity flow through a 302 temporary redirect? Oh my gosh, this question never goes away. So there are two types of redirects, well, there are multiple types of redirects, but the main ones are the 301 and the 302 redirect. A 301 is a permanent redirect, which tells us that a new page is replacing an existing page forever. And a 302 redirect tells us that the content is temporarily available on a different URL. And from a, oh, we can hear you now. Cool. And from a practical point of view, what generally tends to happen here is, with a 301 redirect, we'll focus on the destination page. We'll index that URL and move all the signals there. Whereas with a 302 redirect, we'll focus on the initial page and try to keep all of the signals there. So it's not a matter of PageRank passing or not, but rather which of these URLs is actually the one that keeps the signals. It's not that these signals get lost with a redirect. It's more like, are they here, or are they there? And that's kind of the main difference. So if you're tracking which of these pages is ranking, the redirect target or the initial page that is doing the redirecting, then with a 302, you'll probably tend to see the initial page ranking, because that's the one that we pick, because you're telling us the content is just temporarily somewhere else. Whereas with a 301 redirect, we'll probably tend to rank the destination page more. And as always, people get this wrong regularly, and our algorithms try to figure out what it is that people are trying to do. So if we see a 302 redirect in place for a longer period of time, we might decide, well, probably the webmaster meant this to be a 301 redirect, and we'll treat it as such, and we'll shift all of our signals over to the destination page instead. So that's something where you might see those kinds of changes happening over time as well. So it's not that link equity or our signals flow through a redirect or not, but rather, which of these URLs do we end up picking for our indexing? And that's the URL that ends up getting all of these signals.
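For anyone who hasn't looked at redirects at the protocol level, a minimal sketch with hypothetical URLs of what the two responses look like on the wire; the status code on the first line is the only difference, and it's what tells Google which URL to keep:

    HTTP/1.1 301 Moved Permanently
    Location: https://www.example.com/new-page

    HTTP/1.1 302 Found
    Location: https://www.example.com/temporary-page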
Hey, John, that's Lyle. I think you can hear me now, but not see me. I guess that's better for this purpose. Basically, I had a quick question. As you know, we've been dealing for years with all kinds of strange ranking issues with carbuyingtips.com. And obviously, we keep working to try to figure out what on Earth is going on. One of my partners, when he was dealing with some YouTube issues, had a potential theory based on disallowed words. For instance, our site sometimes would refer to a car salesman as a moron or something like that in some of our articles. So basically, I just wanted to find out, could there potentially be an issue with using language like that on certain pages of the site? Not profanity, but insult-type stuff like that. So you mean that we would rank a page lower because it's using informal language? Yeah, informal slash insulting. I mean, we still see the phenomenon where other websites that have basically stolen our content and then slightly modified it rank kind of like where we used to, and we rank kind of badly on a lot of stuff. So we're just trying to cover all the bases, and that was one of the things. When they modify the content after they steal it, they don't tend to steal those pages on the site that have that type of language usage. So I don't think that would affect anything. OK. Yeah. I mean, some pages have user-generated content on them, and they use this kind of informal language as well. So I think that's perfectly fine. OK. Thanks. All right. Sure. Go for it.

Hello, John. Hi. Actually, I have a blog, and I have a post about SEMrush. So I described SEMrush, and I have a link to the SEMrush blog. My question is, is it considered as if I am promoting SEMrush, like affiliate marketing? I have added a nofollow attribute on that link, so is it fine? That sounds good to me.
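For reference, a minimal sketch with an illustrative URL: a nofollow attribute on a link looks like this, and it tells Google not to pass signals through that link.

    <a href="https://www.semrush.com/blog/" rel="nofollow">SEMrush blog</a>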
OK. So another question. It is quite basic; I have just started. It is about cluster-type linking and cross-linking. I have a blog about SEO, and I am describing what SEO is, and digital marketing, content marketing, and so on. My question is, I am linking my posts to other posts that are relevant. So if I go from page A to page B, and page B goes to page C, and page C comes back to page A, and to page B as well, is it considered cross-linking, or can I see it as cluster-type linking? Because I've heard cross-linking is bad for SEO. I think if this is all within your website, that's perfectly fine. That's normal. Because Google, actually, when they look at the web, they index so many pages that link to many other pages. They find links to one site from many other sites. So that means what I am doing is also right? As far as I can tell from your description, that sounds correct, yeah. Thank you, thank you so much, John. Sure.

All right, let me grab some more questions that were submitted, and then we can get to more live questions from you all as well. We added related articles to the bottom of our blog article pages. Can we make the rel next link point to one of these related articles? No. The rel next linking is really just for paginated series. It's not something that's meant for related links. So I would just cross-link those related articles normally. Usually, that's what these plugins tend to do.

A site we work with in the music industry purchased a domain last year. It doesn't appear to have any manual penalty shown in Search Console, but they also can't get any content to show in Google, or any traffic from Google. We've tested the usual items, noindexing, robots.txt, and can see the content in fetch and render. Is there any chance that there's a manual action applied that is not alerting us in Search Console? The coverage report shows all URLs as duplicates, where Google chose a different canonical than the URL, and shows N/A as the Google-selected canonical. I don't know. This sounds like a very specific case, so it would be useful to have the URL. Maybe you have a forum thread that you can link to, and I can take a look at it there. Oftentimes, folks in the forums will be able to pick up on the normal things fairly quickly as well. But it seems like we'd need to take a look there. I tend to suspect that it's more something technical happening, especially if we're seeing that the URLs are being flagged as duplicates. Then it sounds more like something technical rather than something manual on our side. But I'm always happy to look at weird edge cases.

Speed and security are becoming more important factors now. Do you plan to use security headers, DNSSEC, and the usage of a CDN as ranking signals, apart from the data for speed? I don't know if it would make sense to go this specific into the individual elements that make up speed, but I could definitely see that, I mean, we do use speed as a ranking factor. So if all of these elements play into speed and make your website faster for users, then probably that's something that would be working for you there. But I don't think it would be the case that we'd say this specific technology is something that you must use because it's an actual ranking factor. It's more that we say, well, speed is important, and how you achieve that speed is ultimately up to you. Maybe you use these technologies, maybe you use other technologies. There are really cool, fancy new ways that you can make a website really fast. And how you do that is ultimately up to you, because from a user point of view, they don't care what technology you use, as long as the page comes up very quickly.
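For anyone wondering what the security headers in that question refer to, here is a typical set, sketched for illustration; per the answer above, these are good practice for users but not individually confirmed ranking signals. They are sent as HTTP response headers:

    Strict-Transport-Security: max-age=31536000; includeSubDomains
    Content-Security-Policy: default-src 'self'
    X-Content-Type-Options: nosniff
    Referrer-Policy: no-referrer-when-downgrade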
We use web.dev and achieved 100 out of 100 for SEO, performance, best practices, and accessibility. However, we didn't see any ranking improvement. How long does it take? So web.dev is a great way to test your site for a lot of known issues and to compare against known best practices. But just achieving good results there doesn't mean that your website will automatically jump up in the rankings above everyone else. So that's something to keep in mind there. This is not the final and ultimate way of doing SEO and ranking well. It's a list of best practices, and it gives you various things that we can test and that we can flag for you. So I think it's a good idea to look at these things, but you need to be able to interpret what comes out of them, and you need to realize that there's more to ranking number one than just fulfilling a set of technical requirements.

What's happening with mod_pagespeed? I don't know. So if you're interested in that, I'd recommend posting on their mailing list. I believe they have a mailing list, or at least there's one for PageSpeed in general. So see what the folks there say.

Recently, I've been researching a way to serve static content with serverless cloud infrastructure, produced by a dynamic CMS, lots of fancy words, so we can keep the great and easy-to-use backend of WordPress, but serve the content as static HTML for users, which would dramatically increase performance and security. As Google and Automattic are partnering now, and WordPress is powering a lot of the web, do you think there might be a non-AMP way of doing this, instead of geeking out? I don't know. So I think, just taking a step back and looking at your suggestion here, it sounds like you're already pretty high-tech, geeking out with regards to ways to serve web content. So you're probably already in that top percent of people who are doing fancy things and knowing what they're doing, hopefully. I don't know of any specific way to set that up with what you're looking at there, but in general, our testing tools work for any kind of web content. So if you can serve your content using whatever infrastructure you think makes sense for your site, whatever backend you think makes sense for your website, and you can use our testing tools to confirm that Googlebot is able to see that content, then that should work out. So it's not that Googlebot would specifically be saying you need to use this infrastructure and do it like this; rather, you can use whatever infrastructure you want, and as long as Googlebot can get to that content, then you should be all set.
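As a quick sanity check along those lines, a sketch, not a replacement for Google's own testing tools, which also render JavaScript: you can request a page with Googlebot's user agent string and confirm that the expected static HTML comes back (the host here is hypothetical).

    GET / HTTP/1.1
    Host: www.example.com
    User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)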
Let's see. And there's a video question that we looked at briefly in the beginning. You mentioned in previous Hangouts that the latest round of performance-related changes does result in a gradual rankings penalty as your website gets incrementally slower. We're working on improving our site speed. Do you know how long it might take for Google to notice these improvements once they're out? So as with pretty much anything related to web search, it's not something where there's a fixed time frame involved. Rather, we crawl and index pages over time, and we update the signals that we have for these pages over time. There's no fixed timeline. Some of these pages and signals get updated every day or even more frequently; some of them take a little bit longer; some of them take months to get updated. So what you'd probably see here, if you make significant improvements within your website, is a gradual rise over time with regards to us taking those signals into account. It might be a bit tricky when it comes to speed, in the sense that speed is not the most important ranking factor. We do look at things like content and try to figure out which of these pages are most relevant to users as well. So if a site is really fast, that doesn't mean that it's always ranking number one. Theoretically, an empty page would be the fastest page, but that doesn't mean it's a good result for users. So speed is more of a smaller ranking factor there. You'd probably see bigger changes in your site's visibility in search over time based on improvements in the quality of the website overall, and speed is more like something small that would probably even be hard to measure individually.

Does Google take into account the home address when accepting or rejecting Google AdSense requests? My roommate has the same address, and her AdSense request was rejected. I have no idea how AdSense handles these, so I'd recommend posting in the AdSense Help Forum about something like that. Maybe someone there knows a little bit more.

Does Google consider backlinks in the rankings of videos in the video carousel on Google? I don't quite understand how you mean that. If you want to frame it in a different way, maybe try to do that. Otherwise, it might also be something to discuss in the Webmaster Help Forum, where you can elaborate a little bit more on what you're thinking of there.

Hey, John, can I jump in real quick? Sure. Question, does the algorithm possibly associate negative factors with either people or organizations, and then downgrade unrelated websites that are associated with the same people? How do you mean? Basically, we've had the issues with car buying tips, and then one of my partners and I have a hobby-type site related to stone crab fishing. When we put it up, it was ranking in the top three on anything related, and then it kind of disappeared from the rankings. We posted a question on the Webmaster Help Forum, and within minutes, the top contributors all started focusing on the fact that the site had associations with car buying tips. And so I wanted to see if it's possible that there's some kind of a negative connotation somehow attached to us personally. I can imagine that there's something attached to you personally where algorithms would say, oh, man, this guy again. So usually where that comes in is if we see a website that's well interlinked within a set of sites that are all problematic. Then our algorithms might say, oh, we've got to be careful here. All of these websites are problematic, so maybe this new website that's also part of this set is also kind of tricky. But if it's a matter of these websites just being on the same server or having the same owner, that's usually not a problem. It's also generally more of a problem when it goes in the direction of doorway pages, where maybe you're creating a new website for hundreds of different cities across the country, and essentially all of these pages are the same. All of the websites are the same. And that's something where our algorithms might say, well, this doesn't look like a lot of value for us. In a case like this, where they're on completely unrelated topics and they're not linked to each other at all, I don't see a problem with that. Yeah, yeah. OK, thank you, John.

I have another question, John. All right, go for it. Actually, I'm using the Blogger platform, and I am getting the notification that pages are alternate pages with a canonical tag. All the pages are with a mobile parameter. So my question, sir, is: do I need to fix this in Blogger, or is it just normal and I can leave it? I don't know your specific case, but probably you can just leave it like that. OK, I can leave it. Probably, yeah. OK, thanks. I think Blogger adds the question mark, m equals 1, to the URL. Yeah, m equals 1. Yeah, I don't see a problem with that. That should be fine. OK, OK, thank you, thank you so much.
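For context on that Search Console label, a sketch with a hypothetical Blogger URL: an alternate page with a canonical tag just means the mobile-parameter version points back at the main URL, so Google indexes the main one. This appears to be what Blogger sets up by itself.

    <!-- served on https://example.blogspot.com/2019/01/post.html?m=1 -->
    <link rel="canonical" href="https://example.blogspot.com/2019/01/post.html" />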
I just noticed lots of stuff happening in the chat here as well. There's a really long question from LSBio. I feel this would probably be easier to go through in the Webmaster Help Forum. If you could copy and paste that into the forum, then folks there can take a look, and we can look into some of the details as well, because it seems like it's very specific to a batch of URLs and to a specific setup there, and that's really hard to discuss in a live situation like this. Well, just a clarification of what basically happens. These are pages that are the result of a user on our site selecting their biology things, right? So they select a configuration of a gene and a reagent that goes along with it. And what then happens is that we create a search page, effectively. It's a filtering process. We create a search page, and then, of course, it refers to a product page, and they click on that. Those filtered pages end up periodically in Google Webmaster Tools. And there's not just a couple; it happens thousands at a time. So one of the things we were wondering is, we do have it in our robots.txt file, and we believe it's correct. Do we need to do, I'll call it belt and suspenders, and also add a noindex command at the beginning of the page, or something like that, that would help? It just makes it difficult to use Webmaster Tools sometimes when it's full of that kind of information. Finding the needle of the things we really have to take care of is masked by the sometimes over-enthusiastic Webmaster tools capturing everything we do. So where are you seeing these in Search Console then? Mostly in the performance report, the new performance tool. And it will be an anomaly or a soft 404, one of those two, because it tries to go back and find it, and of course, it can't find it. OK. So a noindex would be an option here as well, but then you would have to take it out of the robots.txt so that we can see the noindex. Ah, OK. I wonder if that's already happening to some extent here, because we wouldn't be flagging it as a soft 404 in Search Console if it were completely blocked by robots.txt. So that's something where maybe we're already able to crawl those pages, and then we say, oh, we probably don't need to index these; therefore, we'll let the webmaster know that we stumbled upon them. So that might be something to double-check, that they're actually blocked by robots.txt. We're pretty sure they are. It's just, I think, it also happens, and then they disappear, and then it happens, and they disappear. So maybe it's just a matter of the processing time in between the two. I mean, they're not really pages, so we don't want them in the index. They have no titles, no H1s, none of that stuff, because they're really not pages. They're just the results of a user asking to configure a gene and one of our products. Yeah. So in that case, I would just leave them in the robots.txt. Leave them blocked. I think that's perfectly fine. OK. There's no real way to block them from appearing at all in Search Console, but I think having them in the robots.txt file is perfectly fine. All right, cool. And oh, by the way, when are they going to put the robots.txt tester tool in the new Webmaster tools? I don't know. We haven't announced that yet, but we're trying to be a little bit ahead of turning things down, so that people have a chance to move to something new, to move to the new tools in Search Console. So as soon as we have more plans on what is happening there, we'll let you know. I think this is one of those tools that makes sense to me. So since we haven't announced that we're turning it off, I imagine it'll just come with the new Search Console over time. Cool. Thank you.
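To make the trade-off in that exchange concrete, a sketch with a hypothetical URL pattern: these are the two mechanisms discussed, and the key point is that they don't combine. A robots.txt block stops Googlebot from crawling the page, which means Googlebot can never see a noindex placed on that page.

    # robots.txt: keep the filtered search pages from being crawled
    User-agent: *
    Disallow: /search/

    <!-- the alternative, only if the pages are left crawlable: -->
    <meta name="robots" content="noindex">

Per the answer above, for pages like these that shouldn't appear anywhere, leaving them blocked in robots.txt is the simpler choice; the noindex route only helps if the crawl block is removed.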
Hey, John. Hi. Real quick, I have a client going through a site migration. It's a large enterprise website where they're going from multiple subdomains to one. Right now, individual business pages and content are spread across several branded subdomains. Each business has the same template, with one main page and several subpages. And the single-domain experience will consolidate that business template down to maybe one, two, or three pages, so there's a lot of content consolidation. They'll be doing this in phases, and it'll still be large scale. But since we're talking about so many multi-page experiences going down to one, potentially, for those to-be-retired or redirected business pages on old subdomains, I've been thinking they should intentionally orphan those first, before pushing through so many redirects at once to the new consolidated experience, almost to let the dust settle. Is that the right approach, or is it just best to redirect those old pages to the new, relevant, compact experience and then let the dust settle from there? I would just try to go to the final state as quickly as possible. So instead of creating this temporary situation where things are neither the old one nor the new one, I would try to just redirect to the new ones as quickly as possible. OK, great. Thanks. Cool.

All right, let me see some more questions that were submitted, and we should still have some time for more questions from you all as well. The export in Search Console is not working well for German users, because the number format messes with commas and transforms numbers into dates. I don't know. That's the first time I've heard about that, so I'll take a look at it. If you have some examples that you can send me, or that you can post a link to in the Google Plus thread, that would be really useful. Then I can take that to the team directly.

If it's dynamic serving and a site owner wants to set up an AMP page, also with the same URL as the desktop page, does he need to add an amphtml tag? So, the same URL for mobile, desktop, and AMP. I think, first of all, you wouldn't be able to use the same URL for mobile and AMP if you're serving different HTML, because the same user would be going to that page, and you wouldn't know which content to serve. So that, I think, wouldn't work. However, you can, of course, make an AMP page and just say the AMP page is my normal page. That's a perfectly fine setup. For example, in the new WordPress AMP plugin, I believe there's an option, it's called Native AMP, I think, where basically your website is purely an AMP page. And that's a perfectly fine setup. So in a case like that, you would, I believe, set the amphtml tag to the same URL, so that we know this is meant to be the AMP page. You would also set the canonical to the same URL, so that we know this is the canonical that you want to have indexed. And then we'd be able to pick that up.
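As a sketch of that native-AMP setup, with a hypothetical URL: both link elements point at the page itself, telling Google that the AMP page and the canonical page are one and the same.

    <link rel="amphtml" href="https://www.example.com/page" />
    <link rel="canonical" href="https://www.example.com/page" />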
Do backlinks help in the rankings of videos in the Google search carousel? So we use a number of factors when it comes to ranking, and that does include links. So it would be wrong for me to say that we don't use links at all. However, we use, I don't know, over 200 factors for crawling, indexing, and ranking, so focusing on links alone doesn't really make sense. Good content traditionally picks up links on its own, and if you're creating video content on your website, then all of that is generally also interlinked within your website. So over time, these things settle down on their own. It's not that you need to explicitly build links so that you can show up in a video carousel. I think that wouldn't make any sense.

How does Google treat side navigation links that are hidden at mobile resolutions on responsive pages, but visible on desktop? We would probably follow those normally, so I don't see any problem with that. That said, if your website is hard to navigate on mobile, then users, who for a large majority of sites are mostly coming in on mobile, will have a hard time navigating your website, and they'll have a hard time finding your other good content. So I would certainly make sure that any UI that you have available on your website is also available in some form or another for mobile users.

Recently, we learned that many of our pages have not been shown in SafeSearch results. These pages are mostly destination pages for cities and countries. I've been keeping track of some of the keywords, for example, gay Barcelona, or generally gay plus a destination city, where in the past five days, our Barcelona index page showed up in first position in SafeSearch, but has now disappeared from SafeSearch. We always make sure that there are no explicit images or profanity in the content, but it hasn't guaranteed our position in the SafeSearch results. Could you explain how the SafeSearch algorithm actually works? So we use a number of factors in figuring out when to show which content to which users in the search results. I don't think there's one simple thing that makes SafeSearch work or not. I suspect with a website like yours, it'll always be kind of tricky for algorithms to figure out what exactly we should be showing here and how we should be showing it in the search results. I think I passed your website on to the team here; they want to take a look at it as well, so I can definitely double-check with them. But I imagine it'll always be kind of tricky and a bit borderline for our SafeSearch algorithms to figure out how we should be handling this kind of website, which is always, I think, a little bit unfortunate. But it's hard to find exactly the right balance there.

I have two websites that offer very similar content. Some of it is even duplicated, but only one will verify for Google News, while the other one won't. The noticeable difference is one has a health and fitness section, while the other one focuses more on lifestyle content and celebrities. Why would one be accepted into Google News and not the other one? I don't know why that might be happening. I don't know the Google News policies specifically in that regard, so it's really hard to say. In general, though, if these websites are so similar that you're saying some of the content is even duplicated, maybe it makes sense to just focus on one website, rather than having two websites that are essentially duplicated or very similar, targeting the same audience. But that's, I think, more of a general question for you to consider. With regards to Google News specifically, I would recommend going to the Google News Publisher Forum and double-checking with the folks there. The experts in that forum have a lot of experience with sites that are accepted into Google News, sites that get improved so that they do get accepted, and sites that wouldn't get accepted into Google News. So they can probably give you some tips with regards to what to watch out for, specifically for Google News.

I have a hotel website, and even if you search for it with its exact name, which is unique, I can't see it on the first page of search results. My Google My Business listing was suspended for two weeks. The listing was closed by admins, and I sent in the documentation. They understood it was a mistake and reopened it. I suspect this is the reason why my website can't be found in Google. So I don't know about the specific case here, so that's really hard to say.
But in general, just because a website isn't in Google My Business doesn't mean that we wouldn't show it in the web search results. For the most part, the web search results are independent of the Google My Business listings. Obviously, if it is in Google My Business and we show it in that Maps listing, then that would be one place where your website would be visible. But just because it's not in the Maps listing doesn't prevent it from appearing in the normal search listings. So my suspicion is that there's probably something else that you could be focusing on or looking at there. And as in some of the other cases, I'd recommend going to the Webmaster Help Forum and getting some input from other people who've seen a lot of these cases and might be able to help you figure out what you could be doing to improve a little bit.

I noticed Google rewrites some titles and meta descriptions. Any idea how to know if Google will rewrite the content or keep the original version? We do have guidelines for how to make good titles and good descriptions, so I'd recommend taking a look at those. Oftentimes, when I see Google rewriting the titles or the snippet, it's usually in situations where we see something almost like keyword stuffing happening in the title or the description. So that might be one thing to watch out for. Another thing to keep in mind is that we do try to pick titles and descriptions based on the query. So if you're doing a site: query and you're seeing your titles and descriptions shown one way, that doesn't necessarily mean that they'll be shown the same way when normal users search with normal queries. So I'd take a look at both of those, and also definitely make sure to check out the Help Center article.

How does Googlebot view website personalization? We have a new product that layers the website content to allow personalization based on industry, location, or even down to a single company. This allows us to serve really bespoke content to individual end users. My concern is that I'm showing the original on-page content to Googlebot and personalized content to end users. Will this affect our clients negatively? Maybe. Maybe it will. So the thing to keep in mind is that Googlebot indexes the content that Googlebot sees. If you have something unique that you're showing individual users, and Googlebot never sees it, then we wouldn't be able to index it, and we wouldn't be able to show that website in search for those queries. So for example, if you recognize that a user is from the US and show them English content, and show a user from France French content, and Googlebot crawls from the US, then Googlebot will only see the English content and will never know that there's actually French content on this website as well, because it would never be able to see that content. So that's something to keep in mind here. If you're just doing subtle personalization, like maybe adding related products, or adding additional information to the primary content on your page based on location or other attributes, then we'd be able to rank the page based on the primary content that we can see. But we still wouldn't know what this additional layer of information is that you're adding to those pages. So it's not a matter of us penalizing a website for doing this or causing it any problems. It's more a practical thing: we can't see it, so we don't know how we should rank it.
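On the titles and descriptions point, a quick sketch of the difference the guidelines describe, with made-up examples: the first style tends to be kept as-is, while the second tends to get rewritten.

    <!-- descriptive, likely kept -->
    <title>Hotel Example Barcelona: Rooms, Rates and Location</title>
    <meta name="description" content="Family-run hotel near the old town, with breakfast included and free cancellation.">

    <!-- keyword-stuffed, likely rewritten -->
    <title>hotel, cheap hotel, best hotel, barcelona hotel, hotel barcelona deals</title>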
OK, looks like we're running towards the end of the session, so maybe I'll open things up a little bit more to you all again. If there's anything on your mind, feel free to jump on in now.

John, can you hear me? Yes. Thanks. I can hear you, but I cannot find my question. You read my question about my hotel website and Google My Business, and I have two small questions there as well. My hotel website has a .com.tr domain name, and therefore Google automatically makes its region Turkey in Google Webmaster Tools. I can't change it, but my customers are mostly people from the UK, for example, and I want to be listed in searches from the UK. Do you think changing the domain name is needed if I want to be found by people from Europe? You could do that. In general, we use geotargeting to recognize when a user from a certain country is searching for something local, and we'll use geotargeting to try to highlight those pages for those users. So if a user in the UK is searching for a hotel in Turkey, then there's no need to do geotargeting, because a website targeted at the UK would not be more relevant for a user searching for something explicitly in Turkey. So I think, for the most part, you wouldn't need to use geotargeting there. Yeah, I don't want to use geotargeting, but it automatically sets Turkey, since my domain is .com.tr. Yeah, so I think that's perfectly fine. A website where we recognize Turkey as the country can still be relevant globally. So unless, let's say you're, I don't know, a pizzeria, and you offer a special Turkish pizza and deliver it to users in London, then for a user in London searching for pizza in London, your Turkish website would have a problem. But if a user in London is saying, I want to search for hotels in Turkey, then your website is perfectly fine. Understood. And one last thing, please: the domain whois info. Is it important what is written there as the company name in the domain whois info? No. No. OK. Thank you very much. Then it is a myth. Yeah, I read it somewhere. Then thank you very much. Sure. Fantastic. Bye-bye. Thank you.
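A side note on that, sketched with hypothetical URLs: the ccTLD fixes the country association, but language targeting is separate. If a site like that hotel's served, say, English and Turkish versions of a page, hreflang annotations are the standard way to tell Google which version to show to which users, independent of geotargeting.

    <link rel="alternate" hreflang="en" href="https://www.example.com.tr/en/" />
    <link rel="alternate" hreflang="tr" href="https://www.example.com.tr/tr/" />
    <link rel="alternate" hreflang="x-default" href="https://www.example.com.tr/" />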
I've got it. I'm sorry. John, it's Michael. First of all, I want to thank you for introducing a new word into my vocabulary, the word cruft. Thank you. I will try to use it in a sentence with my tech team this week. So I think there's kind of a general question here for a lot of publishers. You'd be surprised how many people create a ton of useless tags. And this was something that we realized a few years ago. Somehow or another, we had thousands of tags. We had a tag for the neighbor's best friend and the guy who... So we worked really hard, and we got down to a few hundred tags, and we were very, very happy with how we did that. And now I see, even after that, a few of the things that we bucketed and set up 301 redirects for, like, we don't cover, for instance, celebrity crime anymore, or celebrity divorce, as we just do the fact-checking. What happens when you've got all these redirects to this one sort of tag, which is no longer that applicable? Does it just die out? Or do you redirect it some other way? Because I can't really think of a way to redirect, let's say, celebrity divorce or health to something else anymore, and yet we have, let's say, a hundred stories on that tag at this point. I think that's something that can naturally evolve over time. So if you start having more content there and you want to revive that tag, that's perfectly fine. If you want to combine things even more, that's perfectly fine as well. These kinds of almost-category pages evolve over time. I think that's normal. But what happens if that category is just no longer useful anymore? Do I just let that tag sit forever? I mean, it's fine, but I'd obviously like to consolidate as much as I can and make everything tighter. I think consolidating is fine. Another option might be that even a noindex makes sense, like, we don't really want to be indexed for this content anymore, but we want to keep it on our site. If people know that it's there and they specifically search for it within your website, then maybe a noindex would help Google or other search engines to focus more on the indexable content. Okay, great. Thank you.

Hey, John, just a real quick follow-up. I put this in the chat, related to the two different websites that have shown the same behavior, which was basically: ranked really well, then went into omitted results, and then kind of came back in the 30s and 40s. So I dropped the URLs in the chat. Basically, there's got to be something we're doing, some kind of technically obscure thing or something in common between the two, since they're behaving so similarly. Yet all the Google tools, mobile-friendly, et cetera, say everything is fine. And like I said before, we tried posting in the Webmaster Help Forum, and everybody focused on the association between the two sites and the ownership. So I just wanted to see if there might be some guidance you can provide on how we can figure out what could possibly be causing this same bad behavior on two unrelated sites. Yeah, I don't know offhand. I'd really have to take a look there. But I copied it down, so I can take a look. If you have a link to the forum thread, I can look at what people were digging into there as well, to see where they were on the right track and maybe where things were not so much on the right track. All right, I'll see if I can find the forum thread real fast and get it in the chat before the end. Thank you. Cool.

Hi, John, one question from me. Okay. If we use noindex for some pages on the website, can you still use that content to calculate the thematics of the website? If it's a noindex, then we would not index that content. We would not use it. Okay, thanks. Yeah.

I just want to do a follow-up. In December, or maybe late November, I presented the problem that we sell these ingredients for biological researchers, and it's pretty clear that in the search process for a product with a word like insulin, which is a human hormone, we're being treated as if we're selling medicine. It's recognizing that we're theoretically selling medicine. Sorry. You said you were going to show it to your folks, and I just wanted to do a quick follow-up. And sometime, if you want examples of how it's misreading things, I have plenty. Okay, I need to head on out, but I'll double-check. Thank you very much.

All right. Thank you all for joining. I'll be back, I guess, with the next Hangout on Friday, so feel free to drop questions there as well. Until then, I wish you all a great week. Bye, everyone. Thanks, John. I dropped that in the chat. Thanks so much. Thank you.