 Just a second. Hi, everyone. Welcome to today's Webmaster Central Office Hours Hangouts. My name is John Mueller. I'm a webmaster trends analyst here at Google in Switzerland. And part of what we do are these Office Hours Hangouts for webmasters, SEOs, publishers, anyone who's kind of interested in search, has a website, and is seeing some questions there. As always, a bunch of questions were submitted. But if any of you want to get started with the first question, you're welcome to jump on in now. I would love to. OK. So I have this client who was asking me about the movie showtimes rich snippet that Google shows sometimes for certain branded keywords for the cinema name or movie. And I was reading about it, trying to find out how to get them. We initially thought schema and a good table setup on the front end, and maybe a new showtimes link building strategy to complement it. But we heard about this private API or something that might be what's pulling in the results for the one box that Google shows. We found this company called Webedia that actually works with movie databases. So I wanted to know if we as webmasters have the chance to get these rich results by implementing schema plus the tables or some good practices. So showtimes, that's the movie schedule, right? Yes, correct. I don't know how that's set up. I don't think it's related to a specific structured data type. At least we don't have it documented in the developer center, but rather, as far as I know, it's just separate feeds from these bigger companies that are active in those niches. I can double-check, but I don't know if I have anything specific I can point you at. So if you want, you can drop your email address. Maybe if you have a link to a help forum post that kind of elaborates on what you're trying to do, feel free to drop that into the comments here in the chat, and then I can try to follow up on that for you. Absolutely, John. OK, thank you. 
I'll drop it. We have a Webmaster Forum thread about it, and I will post my email. OK, fantastic. Thanks. All right. Any other questions before we jump into the submitted ones? OK, fine. Let's see if I can pull it up. Quick question, should we use hreflang annotations on non-canonical URLs, or must we only use them for canonical URLs? So the short answer is you don't need to, and it doesn't cause any problems if you were to use it. Kind of the background there is if a URL is not seen as a canonical URL, then we would not show that in search. So therefore, any hreflang annotation that's associated with that URL would not be used. So from that point of view, if we wouldn't show the page in search anyway, then the hreflang annotation that you add there doesn't really change anything. So you can, if you have the markup there already, but I wouldn't really spend any time on adding hreflang annotations to URLs that you know are not going to be canonical. We have some links at our site to other internal pages. These links are shown as a pop-up with five to ten lines of good-quality explanation of the subject in the link. The link is marked as nofollow. I have a feeling that this is not good SEO, both the pop-up and the nofollow. I would prefer to show the content in the pop-up on one normal web page instead of five to ten links with pop-ups. I suspect your suspicions are correct. So I think there are a few aspects there. On the one hand, if these links have a rel nofollow attached to them, then we would not follow them and not use them to crawl and pass signals across the rest of your website, which is probably not something that you want. You probably want us to crawl all of these pages and to recognize how important they are and to show them more in search. So in general, for internal links, I would recommend not using the rel nofollow, unless you have really good reasons to do that. The other thing with regards to the pop-ups, I don't really know exactly how you mean that. 
So that's something that is kind of hard for me to say. But in general, you're welcome to link to other parts of your website. If you have a pop-up within your website pointing to other parts of your website, usually that's fine, as long as that works for your website, as long as that works for your users. So that's something I'm not really too worried about. It's really more the matter of using nofollow for internal links for things that you do want to have indexed and shown in search. We're working on a page speed initiative to defer images that are outside of the viewport on page load, per the recommendation of PageSpeed Insights. Our concern is that the crawler will have trouble finding images that are below the fold, as they don't show up in the DOM on the initial page load. How should we balance the recommendations of PageSpeed Insights and the ability of the crawlers to find the images? There's a bit of background noise somewhere. Let me just mute some of you. See if that works. Perfect. OK. So with regards to images, I think there are a few aspects. On the one hand, you need to think about which images you really want to have indexed. So just because you have images on a page doesn't necessarily mean that they're useful for your website with regards to image search. So think about how people might search using image search to find your content and how they might want to search for things, which images they'd like to see, and how that can drive traffic to your website. So that's kind of the first thing, is like you don't really need every image on your website to be indexed in image search. A lot of times images are more decorative and not something that actually drives traffic. On the other hand, there are multiple ways that you can use lazy loading to kind of defer the loading of the images in a way that still works for Googlebot. So I'd double-check our guidelines on that. 
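As a rough illustration of the discoverability point: a `data-src` placeholder pattern hides the image URL from anything that only reads the `src` attribute, while the browser-native `loading="lazy"` attribute keeps the real URL in the markup. This is just a sketch with hypothetical markup and file names, not an official Google recipe:

```python
import re

def to_native_lazy(html):
    """Convert data-src placeholder images (whose real URL is invisible to
    anything that only reads the src attribute) into native lazy-loading
    img tags that keep the real URL in src, so the image stays discoverable."""
    return re.sub(
        r'<img\s+data-src="([^"]+)"',
        r'<img src="\1" loading="lazy"',
        html,
    )

# Hypothetical snippet using the JavaScript-dependent placeholder pattern.
snippet = '<img data-src="/photos/product-42.jpg" alt="Product 42">'
print(to_native_lazy(snippet))
# <img src="/photos/product-42.jpg" loading="lazy" alt="Product 42">
```

The same idea applies to other lazy-loading approaches: whichever mechanism you use, the rendered page should still contain a real link to the image file.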
That's something where there are different ways that you can do that and ways that would work for Googlebot in particular. And with regards to the initial DOM of the page, that's less of a worry. It's more a matter of, when we render the page, do we find the image tag that links to your images? And if we can find the link to your images, then we essentially have everything anyway. But those are kind of the three aspects I'd watch out for. Kind of just avoid just generally indexing all of the images just because you can, and then double-checking the way that you're using deferred loading of images, and then making sure that those images that you do care about are linked in a way that we can discover. How does a DMCA notice affect a site? We have two for content that is legal, but we don't have the resources to protest. So we have removed the content, but not the DMCA notice. Is that affecting Google SEO? So I can't give you legal advice with regards to the DMCA process. So that's kind of one thing to keep in mind here. My understanding here is that if there's a DMCA complaint that is active, then we will remove that URL from search and show that notice in the search results appropriately. Within Search Console, I believe you have a link to kind of go through the process of claiming that this DMCA complaint was erroneous and that you disagree with it. At that step, once that has happened, it's essentially up to you two, so kind of you as the person who published that content and the person who submitted the complaint initially, to figure out what the solution is there. Usually, as far as I know, that goes through the normal legal channels. From a search point of view, if you're talking about two DMCA complaints that have been submitted for your website, then generally, that's not a big thing. We will filter out those two URLs from search, and the rest of your content is the rest of your content. So that's not something that is really problematic. 
We're only receiving traffic from our brand keyword. From two months ago, we're now in the top stories, because Google hadn't indexed our news in the carousel for two years. We changed our directory from slash notices, and now we have the top stories carousel, but only when we search for kind of the brand name. What can we do? Is there something more wrong? Usually, that can be completely normal. So the top stories carousel is a normal organic search feature. There is nothing specific tied to things that you need to do on your website in order to be shown there. And in some cases, we show sites for some specific queries. In other cases, it makes sense to kind of broaden that out and show these sites for broader queries. But this is a normal organic search feature. There is nothing kind of manual about that where you need to tweak things manually on your website to make it appear there. It's more a matter of, we can crawl and index your content, and therefore, theoretically, we could show these pages in the top stories carousel. Sometimes it makes sense for algorithms to pull in these kinds of pages in the top stories carousel and sometimes it doesn't. There is nothing really special in regards to that. When we search for our brand keyword for our digital newspaper, we find a search results page with our address. Can we remove it? It's not interesting for readers. We're not a local business. I don't really know how you found an address. So if that's something that's on a specific website, then obviously you'd need to contact that website to have that removed. As far as I know, especially for businesses, it's not something where we would say remove a random web page from the search results because it has the address of a business on it. So that's something where I'd recommend trying to find the source of that address and just having it removed there. If it's something that we're showing in the search results, usually we pull that information in from the normal web content. 
So I would, again, try to double-check where that is coming from and try to remove it from there. We're about to launch some new brand pages to showcase the various different brands that we sell. They will be subcategorized by brand and product line. However, we also manufacture and sell items under our own brand name. I'm concerned that our new brand pages will compete with the main category pages that have all brands listed. I've been looking at how other retailers do this. And while M&S do seem to manage it well, there are not many examples out there. I was thinking of setting our own brand pages as noindex or with a canonical to the main category page to prevent them from competing. These are ultimately to help users to find the products that they want. But it would be counterproductive if the main product pages all tank as a result. Do you think that's a good idea? I think that's always worth considering. So if you have different pages on your website that are essentially available for ranking for the same keywords, then theoretically, they can compete with each other for those keywords. Theoretically, it can also happen that multiple of these pages show up in the same search results page, which, on the other hand, might be fine. So I think it's fine to look into that and to consider, does it make sense to combine things into one page? Does it make sense to just have one of these pages indexable? Or does it make sense to have these different pages indexed separately because maybe they fulfill a different need or a different kind of query? So that's definitely something I would look into. That said, I don't think there's really a clear kind of perfect answer for this situation. It really depends on the website, on the type of content that you have on these pages. For example, if you're the manufacturer and this is your own brand, then chances are not a lot of other sites are going to be trying to rank for those keywords anyway. 
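The two consolidation options being weighed here, a noindex versus a canonical to the main category page, each boil down to one extra tag in the page head. A minimal sketch, with hypothetical URLs:

```python
def consolidation_tag(strategy, canonical_url=None):
    """Return the head tag for one of the consolidation options discussed:
    'noindex' keeps the page out of the index entirely, while 'canonical'
    suggests that signals be consolidated onto the main category page."""
    if strategy == "noindex":
        return '<meta name="robots" content="noindex">'
    if strategy == "canonical":
        return f'<link rel="canonical" href="{canonical_url}">'
    raise ValueError(f"unknown strategy: {strategy}")

print(consolidation_tag("noindex"))
print(consolidation_tag("canonical", "https://example.com/brands/"))
```

Note that the two behave differently: a noindexed page is simply dropped, while a rel canonical is a hint that asks Google to fold the page into the target.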
So while you might be kind of competing with yourself for those keywords, you're competing at a very high level. You're kind of competing for position one, two, or three. And these are all pages within your own website. So it's not that your main pages would suddenly disappear when someone is searching for your brand. It's more that, well, there are multiple pages now that are showing in the search results when someone is searching for your brand. On the other hand, if you're competing with a lot of other websites out there and they're all trying to rank for these kind of brand name terms as well, maybe this isn't your own brand, maybe you just have a lot of resellers that are really strong, then that's the kind of situation where you might say, well, there's a lot of competition out there. For us, it's a matter of kind of ranking number three, number four, versus ranking on page two somewhere if we kind of dilute things into two or three separate pages. And in that kind of situation, it probably makes sense to concentrate things on one page, which you could do by noindexing the other page, by using a rel canonical, by just having one page overall. Any of these kinds of mechanisms to combine multiple pages for indexing, that might make sense there. So that's, again, something where I don't have an absolute answer, but where you can kind of look at the situation yourself and see, is there a lot of competition here? Do I have to make sure that I have really, really strong pages? Or will my pages rank anyway, and is it more a matter of which of these pages I want to have shown in search? And then maybe it's not so critical that you combine things and make fewer pages that are a bit stronger. Let's say someone wants to launch a new website in a competitive niche where the search results are dominated by 10 to 15-year-old sites with thousands of links. 
Let's say the website owner is convinced they can do a better job than them by providing visitors amazing content with a great user experience. Does the webmaster have a chance in a reasonable timeframe? I don't know. Maybe. So I mean, there are always these storybook examples out there where someone comes charging in with something really fantastic and it blows everything else away, and they perform really well in Search, they perform really well in business overall. It's possible. It doesn't mean it's going to be easy. So in particular, if you're looking at the search results and these are companies that have been actively working for those 10 to 15 years on those websites and they're really strong at competing with each other for these kinds of search results, then I suspect that's going to be really, really hard. On the other hand, if these sites are out there and they were put up 10 to 15 years ago and kind of left to kind of hang around for a long time, then maybe that's something where you could do something completely novel and kind of turn that upside down. But it's not something where there's a clear answer where I'd say it's like it's never possible or it's always going to be possible. Sometimes it's just very hard, a lot of hard work. Yeah, it's really hard to say. I wouldn't assume that just because other sites are out there longer and have a lot of links that it's impossible to rank instead of them. Question about flexible sampling. Let's see. If we use a lead-in approach rather than metering and we enclose paywalled content with structured data, does this mean we can provide the entire article to Googlebot for indexing and not have it considered cloaking? Yes, that's essentially how the flexible sampling approach works. You can provide the content to Google with the appropriate structured data. 
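For reference, Google documents paywalled-content structured data that marks the gated section with `isAccessibleForFree` and a CSS selector. A minimal sketch of that JSON-LD, with a hypothetical headline and an assumed `.paywall` class name for the gated container:

```python
import json

# Sketch of paywalled-content structured data for flexible sampling:
# declare the article as not freely accessible, and point at the CSS
# class (".paywall" here is an assumed class name) wrapping the gated part.
markup = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example article",  # hypothetical value
    "isAccessibleForFree": False,
    "hasPart": {
        "@type": "WebPageElement",
        "isAccessibleForFree": False,
        "cssSelector": ".paywall",
    },
}
print(f'<script type="application/ld+json">{json.dumps(markup)}</script>')
```

The full article text can then be served to Googlebot inside the element matching that selector, which is what distinguishes this setup from cloaking.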
And when you feel that it's appropriate, you can show either the lead-in, or you can have kind of the metered setup where you say, in this particular case, users are able to see this much content, or maybe they're only able to see a certain number of pages. That's totally up to you. So that's something that you can do with flexible sampling. It's always a matter of, does it actually work for a website? Does it work for your business? So that's kind of more the angle that I would worry about there. If users come to your site and they can't read any of your content and they can find just as much good information out there on the rest of the web, then probably they're not going to jump through hoops to kind of get access to that content. But on the other hand, if they've seen that there's something really useful on this website, and they see either from the lead-in approach or from kind of the articles that they've been able to access already that this is something that's worthwhile to go through these extra steps of registration or payment or subscription, then that sounds like a good idea. My recommendation there would generally be to experiment with this. So don't just go all in and do it like in the extreme case that you only have a line of text from all of your content. But really kind of double-check to see what works well for your users and for your specific content. Nowadays, everyone talks about user intent. If a page is blocked by robots.txt and it's ranking, how does Google determine the query relevancy with the page content, as it's blocked? If Google is using other signals, then what happens if the site owner replaces the entire content? For example, if pornographic content is added. So if it's blocked by robots.txt, obviously we can't look at the content. So we do have to kind of improvise and find ways to compare that URL with other URLs that are kind of trying to rank for these queries. And that is a lot harder. 
And because it's a lot harder, it's also something where if you have really good content that is available for crawling and indexing, then usually that's something we would try to kind of use instead of a random robotted page. So from that point of view, it's not that trivial. We do sometimes show robotted pages in the search results just because we've seen that they work really well, in that, when people link to them, for example, we can estimate that this is probably something worthwhile, all of these things. So it's something where, as a site owner, I wouldn't recommend using robots.txt to block your content and hope that it works out well. But if your content does happen to be blocked by robots.txt, we will still try to show it somehow in the search results. I asked a question about subdomain leasing in the last Hangout, but I have two more questions. I think your answers would be interesting to the wider webmaster audience, especially as you talked about changing domains recently. A few months ago, the owner-operators of some coupon white-label sites that were on leased subdomains changed. It seems even though all the content was fully replaced, there was no significant effect on the sites' rankings in Google. So my questions are, how does Google normally treat cases where the complete content of a website or URL is replaced with similar but completely new content? This is actually pretty common. So I mean, probably not that coupon sites are swapped out, but that you replace content on a web page is a pretty common situation. So it's essentially a normal website revamp. Sometimes it's just content that's rewritten. From our point of view, we take the signals that we have associated so far with that particular page, and we look at the new content, and we associate whatever new signals that we can pick up from the new content with that page as well. 
So if things are just slightly shifting, so if you take an article on your website and you just rewrite it to be more modern, to include more modern words, to include more modern information, then we'll take that more modern version into account and use that for indexing. Kind of makes sense. The same thing would happen here. If one coupon page's content is swapped out against a different coupon page's content, it's essentially a way of rewriting a certain page. So that's not something I would see as being particularly unique in that sense. The important part when you're doing a revamp like this with your website is ideally to keep the same URLs as before. That makes it a lot easier for us to associate the old signals with the new content. So if you're making significant changes on a website, try to keep the old pages as much as possible. If you can't keep the old pages, then redirect pages that have changed to the appropriate new URL. How does Google normally treat cases where the owner-operator of a website changes? So if we're looking at a general business or general website, sometimes the ownership situation changes and essentially the website itself continues to run in the same or pretty much the same way. And that's not really something that Google would get involved in. In that sense, it's kind of like you have one company. It's running a website. And that company gets acquired by a different company. And now, theoretically, the owner of that website is that other company. But everything around that website is essentially still the same. It doesn't make sense for Google to change anything significantly in their index in the search results just because of some ownership change there. I think it would be different in situations where, say, one website expires and someone else picks up that domain name and puts a new website on there. Then that kind of makes sense to treat that differently, because it's essentially a new website instead of the old one. 
But if you're just changing ownership, that's a kind of normal thing to happen over time. It's not anything unique that, from my point of view, we'd need to treat differently. I also think it would be really hard for Google to try to pick up these kinds of things, because we don't really track who exactly owns which website or which part of a website and how that evolves over time. I recently started working on a website where pages in one of the languages don't get indexed at all. So the question goes on with a bunch of examples. I looked into this briefly. One of the things that I noticed is every time I try to access the Polish version of a page, it redirects me to an English version of the page. And I noticed that this also happens for Googlebot as well. So when we try to render one of these Polish pages, we get a redirect to the English version, which is probably done because you're recognizing that it's maybe an English-based browser or not a Polish-based browser, and it makes sense to show the English version of the page instead. In practice, what happens there is we only see the English version. We never really have access to the Polish version. So we can't really index that properly. So what we usually recommend for cases like this is, if you have a home page that does this kind of adaptive language switching, then that kind of makes sense. But you should have individual landing pages for the individual languages, so that if someone explicitly goes to the English version, they'll always get the English version. If someone explicitly goes to the Polish version, they'll always get the Polish version. And then when you have it set up like that, you can have the home page doing that automatic redirect. And you can use the hreflang annotations for that. In particular, for the home page, you would have x-default to say, this is the default version where I do my fancy redirect. 
And then you have the English version, where you say, this is my English version with hreflang, and the Polish version as the Polish version for hreflang. And then when we look at that set of pages, we see there's an English version, a Polish version, and the default version, where if we don't know where to send people, we'll send them to the default version. You can do your fancy redirect there. But we need to be able to access the individual language versions individually. Usually, that also applies to the individual detail pages on the website. So for the individual products or categories that you have, usually you wouldn't set that up, because it's easier to just have one English version and one Polish version of those pages rather than to have a third redirecting version for every product page, every category page. So with just a Polish version and an English version, I would just make sure not to set up a redirect, and instead to use the hreflang annotations between those two versions, so that we can crawl and index both of those versions independently and use hreflang to swap them out accordingly. If you see that the wrong version of these pages is shown to users sometimes, what I would recommend doing is showing a banner on top instead of redirecting. With the banner, you're leaving it so that anyone is able to access any of those language versions, but they're still getting the information that actually the version that probably suits them better is available here. And they can just click on the link to go maybe to the Polish version or to the English version. So that's generally the recommendation I would have there. So first, remove that automatic redirect on the product pages. Maybe set up something similar with a redirecting home page if you want to do it for the home page alone. And make sure to use hreflang annotations between these pages. And then finally, if you want to do something fancy, use a banner instead of a redirect. Hi, John, can you hear me? 
Yes. Great. So that question comes from me. I was just wondering, all things considered, I will check all the things that you said. But there's another thing with the no-index error in Search Console. So I've given an example as well for that. Search Console shows me that there is a no-index meta tag, but it's neither in the meta tags nor in the header. So I'm wondering if this is just a fluke or could this be a problem as well with all that? I don't know. I don't know where you see the no-index. So usually what I suspect would be happening here is we would. I don't know. I'd have to double check the URL. But what is probably happening here is we get confused with the canonicalization of that page. So because of the redirect primarily, which of these pages should be canonical with what? And we probably or possibly pick a URL on your website that has a no-index on it as a canonical for whatever reason. And it's not so much the matter that there's a no-index there. It's more that we get confused with the canonicalization. Therefore, we could be showing things that are not so useful for you in Search Console there. So I'd have to double check that particular URL. Is that the first one that you have in the platform? Well, there's many of them, but I've just given one example. Do you mind if I email you in a couple of days when I check all the things to follow up on this? Sure. Ideally, maybe just ping me on Twitter. And then we can do it there. OK, great. Thank you. Sure. I think with this kind of redirect thing, you could see a lot of weird results because of that, that essentially go away once the redirect is fixed or once that's removed. John, can I follow up on this? So when you have a home page that does redirect the user based on an IP address, how does Google handle ranking signals that point towards that home page? Like links, for example, most links usually go to a website's home page. 
Does Google forward that to the English version because Googlebot is coming from the US and it's likely that the redirect will go to the English version? Or does Google know how to handle that automatically and spread it out based on hreflang or anything? So usually what would happen there is we would pick a canonical for that page, which would possibly be the English version. So depending on how that's set up, you can look up the canonical in Search Console with the Inspect URL tool. And that would be the URL that would essentially collect these links. Because for links, we see them as going from one canonical URL to another canonical URL. And essentially, that's how we would track that. What happens there in practice is if, say, the English version were to rank, and with the hreflang annotation we know the home page would be the better fit, then we would just swap out the URL after we put together the search results page. So basically, the ranking stays the same. We swap out the URL against the home page and do it like that. And does that happen for somebody who's from a location where there's no matching hreflang version? Sure. So if you have the English-Polish version from before, if you come from Germany, then we don't have a German version. But we have an x-default, so we'll show the home page, which is the x-default. And does the x-default tag need to be placed only on the initial, the first page from each language version? Would that be enough? Yeah, yeah. It's kind of weird with the x-default version, because if you're doing this kind of automatic redirect, then you can't actually put it on the home page. But you can put it on the different language pairs. So if you have an English and Polish version, you say, this is English, this is Polish, and this is x-default. And we can't really control the x-default version. But we trust that it's there, because you're linking to it from the different language versions. 
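The English/Polish setup described here can be sketched as reciprocal hreflang link tags, with x-default pointing at the auto-redirecting home page. The same full set of tags would go on each language version; the URLs below are hypothetical:

```python
def hreflang_links(versions, x_default):
    """Build the reciprocal hreflang link tags for a set of language
    versions, plus an x-default entry pointing at the redirecting home
    page. Every language version should carry this full set of tags."""
    tags = [f'<link rel="alternate" hreflang="{lang}" href="{url}">'
            for lang, url in versions.items()]
    tags.append(f'<link rel="alternate" hreflang="x-default" href="{x_default}">')
    return "\n".join(tags)

# Hypothetical URLs for the English/Polish site from the question.
print(hreflang_links(
    {"en": "https://example.com/en/", "pl": "https://example.com/pl/"},
    x_default="https://example.com/",
))
```

Because the x-default home page redirects, its hreflang set can't be placed there; the English and Polish pages referencing it, as above, is what establishes it.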
OK, but do you generally recommend not having this kind of home page redirect set up? I know, for example, I think IKEA was using just a general home page where the user can just choose the language version. Is that better for Googlebot? Or is it? It works. OK, I think people do it in different ways. I don't know if there are any usability studies out there. Some people have the map set up, kind of like all the different versions linked. Others have a redirect. Others just swap out the content dynamically, so that it's like you go to the home page, and it's automatically in your language, which is also nice. But I don't think, from our point of view, we have any preference there. It's really more a usability question. Maybe also, how many different language versions do you have? If you have 50 language versions, then maybe you want to let people pick. If you just have two, maybe that's something that you can guess better. OK, so generally speaking, just treat Googlebot as a normal user. And hreflang should do the trick as long as it's set up correctly. Yep. Awesome. Cool. OK. Can I ask a question? Sure. OK, great. Oh, can't hear you. John, I've got a quick question for you. On the website, can you hear me? Wait a second. I think Danny was trying to ask a question first. All right. Danny, we can't hear you. Oh, no. Oh, man. We'll have to get back to you, Danny. Oliver, you want to give it a try? Yeah. It's nice to meet you, John. It's my pleasure. I have a website. I'm not getting any search traffic. But what I'm doing is I'm promoting my website on Facebook, on social media. And basically, people usually steal and copy my content on their blogs. And thankfully, I'm getting links from that, because I usually place internal links in my posts, so I'm getting links from that. Are there any bad signals? Because these usually have the same anchor texts and the same URLs. They're all local, like so. They're all local links. 
So is there anything I need to disavow? Or should I just be thankful that I'm getting links? I think in general, if you're getting links from your content, that's kind of OK. The thing I would kind of keep in mind is that if these are, say, other blogs that are just copying your content, and that's why you have a link there, then probably that's not a really valuable link. So it's not going to be something that will be bad for your website. But I don't think it's something that our algorithms would say, oh, this is a fantastic site, and it's linking to your content, therefore your content must be fantastic. It's more like, oh, this is a pretty bad website, and it's linking to your content. It doesn't really tell us that much. It's like, we see the link, but it's not going to be kind of a magical SEO power kind of link. Most of these links and most of these pages aren't indexed at all. I mean, if I search for the URL, they won't show up at all. OK, yeah, that's, again, I guess, a sign that we don't see them as being that useful. I mean, from my point of view, I think this is fine to get started like this and to kind of build up and build a user base, in that if you're creating content that other people find useful enough to copy, then that's kind of a good first step. And then to kind of build up from there and kind of say, well, first I'll have like random people who kind of link to me because they think it's OK content and they copy it. And then as I grow, more people will recognize that my content is really good, and then I'll start gaining really stronger, better links as well. Thank you. Is that any better? Can you hear me now? Yes, perfect. OK, great. Thanks. I'll keep it brief. So an image can have a different alt tag based on whether it's on page A or page B, right? I mean, an image can mean different things in different contexts. So to the extent that you can assign it a different alt tag based on what page it's on, here's my question. 
When an image is used as a link, should the alt tag of the image be about the page that the image resides on, or should the alt tag be about the destination page? It's wrapped in an A tag, right? And the A tag is pointing to some destination page. So the image resides on page A, but it's pointing to page B. And page A is about something, and page B is about something a little different. And if you asked me to write an alt tag, I could write different alt tags for page A versus page B. So does Google think the alt tag should be about page A, or should it be about page B? It should be about the image. So theoretically, you would see the alt attribute as something that's associated with the image, in the sense that it should be describing the image. And that's primarily how we use it for image search. If you're talking about an image that's essentially a navigational element on a page, then maybe having an alt tag, or an alt attribute, that matches what the image is about, like it's a next button, for example, or it's an add to cart button, that kind of thing, then that also makes sense. We do take the alt attribute into account as a part of the anchor for that link. So that does play a little bit of a role there. But I would primarily see this as something that helps with usability for images like that, if you're using them as navigational elements, and less as something that passes anchor text value to the next page. If you're worried more about the anchor text, then I would just use a normal text anchor for that, and maybe also have an image next to it or on top of it or something like that. But usually, I would just use the alt attribute as a description of the image, so that those who are using screen readers, for example, can still use your website normally. OK, because my sense tells me that it should be about the destination page, because the images I'm talking about are not boilerplate graphical images. 
They're more substantial, contextual images that tell something about the page and are used to optimize the on-site elements of the page. So if you were to ask what this page is about, and you're trying to build a cohesive theme, you'd want the alt tag of that image to match the theme of the entire page, so that you're telling the crawler and users what this page is about. But at the same time, my thinking goes, well, it's wrapped in an A tag. And in an A tag, you always write what the destination page is about, to explain to the user that this A tag is going to take you here. So should the image that's wrapped in an A tag also have the same characteristics, in that it should be about the destination page? I think it really depends on what you're trying to do there. I'm not completely sure I understand your particular use case there. It sounds like it's not like an add to cart button. And also, it's not like a giant image that you want to have indexed in image search. It's something in between. Yeah, the simplest thing that comes to mind is maybe a product image. Maybe on the category page, you'd want to optimize that image to be consistent with the category of those products. But that product image is a link that points to the product page. So should that product image on the category page be optimized for the category, or should it be optimized for the product that it's pointing to? I think you could argue both directions. Because what happens from a technical point of view, when we see that kind of a setup, is essentially we pull out the alt attribute and see it as a part of the page that is linking. So this text is seen as a part of the category page. And we also pull it out as an anchor for the page that it's linking to. So it's seen as both a part of the category page, as well as descriptive of the product page that it's linking to. 
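To make that dual role concrete, here is a toy sketch, not anything resembling Google's actual pipeline, of how a crawler could attribute one alt text both to the page it sits on and as anchor text for the link target. The HTML snippet and URLs are hypothetical:

```python
from html.parser import HTMLParser

# Hypothetical category-page fragment: an image link to a product page.
PAGE_HTML = (
    '<a href="/products/blue-widget">'
    '<img src="/img/bw.jpg" alt="Blue widget with chrome handle">'
    '</a>'
)

class ImageLinkParser(HTMLParser):
    """Collects alt text as on-page text AND as anchor text for the link target."""
    def __init__(self):
        super().__init__()
        self.current_href = None
        self.page_text = []   # text attributed to the containing page
        self.anchors = {}     # link target -> anchor text derived from alt

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a":
            self.current_href = attrs.get("href")
        elif tag == "img":
            alt = attrs.get("alt", "")
            self.page_text.append(alt)        # counts toward the page the image is on
            if self.current_href:             # and as anchor for the destination page
                self.anchors[self.current_href] = alt

    def handle_endtag(self, tag):
        if tag == "a":
            self.current_href = None

parser = ImageLinkParser()
parser.feed(PAGE_HTML)
print(parser.page_text)  # ['Blue widget with chrome handle']
print(parser.anchors)    # {'/products/blue-widget': 'Blue widget with chrome handle'}
```

The point of the sketch is simply that one string ends up in both buckets, which is why there is no single "right" page for the alt text to describe; describing the image itself serves both.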
So it's not clear-cut that it should only be about the product page, and it's also not clear that it should only be about the category page that the image is on. Right. I mean, my goal is to increase the trust and authenticity of that page. So I don't want to give conflicting, inconsistent signals to the crawler, where the trust in the website is lost because of inconsistencies, and the crawler thinks, I can't trust anything this website is telling me, because here the same image is pointing somewhere, using this alt tag, but it's not about the page that it's sitting on. But okay, just hearing you say that there's no clear answer kind of gives me a little peace of mind. Okay. Thank you. I wish there were more questions that had clear answers, but sometimes they just don't. Okay, thanks. Sure. Hi, John. Hi. Does he want to go first? Chris, you're on first. Oh, fantastic. Thanks. So currently one of the websites I'm working on is having issues showing up in the Google search index. When I went to Google Search Console, it says the URL is not on Google. The reason for it is that it's a page with a redirect. However, when I went to figure out where this redirect is, or what it's redirecting from, it's saying none is detected, and that the URL might be known from other sources that are currently not reported. So I'm trying to figure out at this point, and I've asked around a lot, what steps I should take to figure out what to do about that. Probably easiest if you can just drop the URL in the chat here. All right, I can do that. Then I can take a quick look at that. Cool. Sometimes these are a bit tricky to figure out, especially when there are JavaScript elements involved, and it's really hard to figure out what's happening. So you're saying it's not being indexed at all, or? 
It'll come up if you specifically search for the link, but it will not come up for the keyword that I've registered it under, even though when I reviewed the keyword, I'm number two in my local area for this keyword. Okay, I think that's trickier, because then it is indexed and it's more a matter of ranking, but I can definitely take a quick look at that. Cool. Okay, we have a few minutes left, if anyone wants to jump in with a last question. Yes, John, hi, how are you? Hi. So John, I have one related question about redirects and canonicals. Is it a problem if URLs are with a trailing slash in canonical tags or in hreflang tags, but they're redirected to the URL without the trailing slash? That's usually fine. I think you have the situation that it's kind of conflicting information that you're giving us, so it's unclear exactly who will win, like with the trailing slash or without. So I would try to be as consistent as possible and pick one or the other. Okay, thanks. Just one more quick question. Is there any difference in the number of SERP results when we have FAQ results, compared with SERPs without FAQ results? In most SERPs with FAQ results, we see no more than five desktop results, whereas normally we see 10 results on desktop. I have no idea. I could imagine maybe they have something, but I don't know. I don't think it's defined. Okay, thank you so much, John. Sure. Can I ask a quick question before we take off? This is an AMP Stories question. We're trying to build a pilot for one of our magazines, and I'm trying to figure out the best way to place these pages for Google to find them. I'm having a hard time finding any information on that. Should we be building a sitemap, a landing page for them? I just haven't seen many examples out there, other than the big news organizations. I think in general, you'd want to link these like normal pages within your website, so we would see them as normal pages. 
I think the tricky part is sometimes there is not a lot of content on these pages, so it's hard for us to rank them. But essentially they're normal HTML pages, so you can put them in the sitemap, you can link to them internally, anything that you would do with normal HTML pages. Yeah, I think the biggest problem is that I'm having a really hard time finding any examples out there, so I don't know if it hasn't been completely rolled out, or what's going on. I think it's just really new, so it's something where people are experimenting with it. I think the WordPress AMP plugin has some experimental features around it as well, so I expect over time more and more people will start using them. Okay, thanks. Cool, okay, I need to take off. It looks like there's still more stuff to be done. If you want, feel free to drop things into the other hangout on YouTube, or also feel free to post on Twitter or in the Webmaster Help Forum. There are lots of really smart folks there as well. All right, thank you all for dropping in, and I hope to see you all again one of the next times. Thanks, John, bye. Bye, everyone.
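As a footnote to the AMP Stories question: since Stories are plain HTML pages, they can be listed in an ordinary XML sitemap, with no special format. A minimal sketch that builds such a sitemap with the standard library (the magazine URLs are hypothetical):

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Return a standard sitemap XML string listing the given page URLs."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for u in urls:
        url_el = ET.SubElement(urlset, "url")
        ET.SubElement(url_el, "loc").text = u
    return ET.tostring(urlset, encoding="unicode")

# Hypothetical AMP Story pages on a magazine site; they go in
# alongside (or in the same way as) any other page on the site.
sitemap = build_sitemap([
    "https://example.com/stories/spring-issue",
    "https://example.com/stories/summer-issue",
])
print(sitemap)
```

The output follows the standard sitemaps.org protocol, which is the point: nothing about Stories changes how they are submitted or linked internally.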